Job:
periodic-ci-openshift-release-main-ci-4.14-e2e-network-migration-rollback (all) - 1 runs, 0% failed, 100% of runs match
#2046274884337668096 (junit, 6 minutes ago)
2026-04-20T19:05:12Z node/ip-10-0-78-13.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-z61rskrl-1cf3c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-78-13.us-west-2.compute.internal?timeout=10s - read tcp 10.0.78.13:53698->10.0.27.229:6443: read: connection reset by peer
2026-04-20T19:10:24Z node/ip-10-0-60-16.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-z61rskrl-1cf3c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-16.us-west-2.compute.internal?timeout=10s - read tcp 10.0.60.16:49176->10.0.27.229:6443: read: connection reset by peer
2026-04-20T19:17:28Z node/ip-10-0-66-203.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-z61rskrl-1cf3c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-66-203.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T19:17:28Z node/ip-10-0-78-13.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-z61rskrl-1cf3c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-78-13.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T19:18:30Z node/ip-10-0-87-91.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-z61rskrl-1cf3c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-87-91.us-west-2.compute.internal?timeout=10s - read tcp 10.0.87.91:51414->10.0.116.177:6443: read: connection reset by peer
2026-04-20T19:18:40Z node/ip-10-0-78-13.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-z61rskrl-1cf3c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-78-13.us-west-2.compute.internal?timeout=10s - read tcp 10.0.78.13:46996->10.0.116.177:6443: read: connection reset by peer
2026-04-20T19:18:40Z node/ip-10-0-66-203.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-z61rskrl-1cf3c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-66-203.us-west-2.compute.internal?timeout=10s - read tcp 10.0.66.203:35012->10.0.116.177:6443: read: connection reset by peer
2026-04-20T19:20:52Z node/ip-10-0-116-150.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-z61rskrl-1cf3c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-150.us-west-2.compute.internal?timeout=10s - read tcp 10.0.116.150:59886->10.0.116.177:6443: read: connection reset by peer
2026-04-20T19:20:54Z node/ip-10-0-44-0.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-z61rskrl-1cf3c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-0.us-west-2.compute.internal?timeout=10s - read tcp 10.0.44.0:59682->10.0.27.229:6443: read: connection reset by peer
2026-04-20T19:20:58Z node/ip-10-0-66-203.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-z61rskrl-1cf3c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-66-203.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
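The failure messages above fall into a few recurring buckets: TCP resets from the apiserver side, client-side 10s timeouts, and context deadline expiry. A minimal sketch (a hypothetical helper, not part of any existing tooling) that classifies the trailing error of one of these event lines:

```python
# Classify the error suffix of a FailedToUpdateLease event line
# into the buckets observed in these runs.
def classify(line: str) -> str:
    if "connection reset by peer" in line:
        return "tcp-reset"
    if "Client.Timeout exceeded" in line:
        return "client-timeout"
    if "context deadline exceeded" in line:
        return "context-deadline"
    if "client connection lost" in line:
        return "http2-connection-lost"
    return "other"

event = ("2026-04-20T19:17:28Z node/ip-10-0-66-203.us-west-2.compute.internal "
         "- reason/FailedToUpdateLease ... - net/http: request canceled "
         "(Client.Timeout exceeded while awaiting headers)")
print(classify(event))  # client-timeout
```

All three buckets point at the kubelet losing its connection to the internal apiserver endpoint mid-request rather than at the lease object itself.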
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-dualstack-recovery (all) - 4 runs, 50% failed, 150% of failures match = 75% impact
#2046277928433487872 (junit, 6 minutes ago)
2026-04-20T18:51:20Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:45706->192.168.111.5:6443: read: connection reset by peer
2026-04-20T18:51:30Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
#2045915092809158656 (junit, 24 hours ago)
2026-04-19T19:25:04Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - http2: client connection lost
2026-04-19T19:30:04Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:34534->192.168.111.5:6443: read: connection reset by peer
2026-04-19T19:30:14Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045915092809158656 (junit, 24 hours ago)
2026-04-19T20:13:22Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
2026-04-19T20:29:00Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:56238->192.168.111.5:6443: read: connection reset by peer
2026-04-19T20:29:10Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045764096418123776 (junit, 34 hours ago)
2026-04-19T08:54:23Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:59072->192.168.111.5:6443: read: connection reset by peer
2026-04-19T09:00:30Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T09:00:40Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T09:15:42Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:45642->192.168.111.5:6443: read: connection reset by peer
2026-04-19T09:15:52Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-ovn-tls-13 (all) - 1 runs, 0% failed, 100% of runs match
#2046276896726978560 (junit, 9 minutes ago)
2026-04-20T19:19:02Z node/ip-10-0-87-214.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3rsk5jyh-d9cf4.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-87-214.ec2.internal?timeout=10s - context deadline exceeded
2026-04-20T19:22:37Z node/ip-10-0-26-139.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3rsk5jyh-d9cf4.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-139.ec2.internal?timeout=10s - read tcp 10.0.26.139:42042->10.0.7.49:6443: read: connection reset by peer
2026-04-20T19:23:27Z node/ip-10-0-87-113.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3rsk5jyh-d9cf4.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-87-113.ec2.internal?timeout=10s - read tcp 10.0.87.113:44198->10.0.74.99:6443: read: connection reset by peer
2026-04-20T19:28:51Z node/ip-10-0-87-214.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3rsk5jyh-d9cf4.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-87-214.ec2.internal?timeout=10s - read tcp 10.0.87.214:47820->10.0.74.99:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.14-e2e-aws-ovn-network-migration (all) - 1 runs, 0% failed, 100% of runs match
#2046274823109218304 (junit, 9 minutes ago)
2026-04-20T19:04:32Z node/ip-10-0-105-253.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-91m21yy0-8a43b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-253.us-west-1.compute.internal?timeout=10s - read tcp 10.0.105.253:44606->10.0.30.44:6443: read: connection reset by peer
2026-04-20T19:09:56Z node/ip-10-0-48-68.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-91m21yy0-8a43b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-68.us-west-1.compute.internal?timeout=10s - read tcp 10.0.48.68:58428->10.0.30.44:6443: read: connection reset by peer
2026-04-20T19:14:22Z node/ip-10-0-48-68.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-91m21yy0-8a43b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-68.us-west-1.compute.internal?timeout=10s - read tcp 10.0.48.68:52652->10.0.30.44:6443: read: connection reset by peer
2026-04-20T19:17:25Z node/ip-10-0-44-11.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-91m21yy0-8a43b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-11.us-west-1.compute.internal?timeout=10s - read tcp 10.0.44.11:47236->10.0.30.44:6443: read: connection reset by peer
2026-04-20T19:20:20Z node/ip-10-0-110-63.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-91m21yy0-8a43b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-110-63.us-west-1.compute.internal?timeout=10s - read tcp 10.0.110.63:42204->10.0.84.32:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-ovn-fips (all) - 7 runs, 0% failed, 71% of runs match
#2046288243665670144 (junit, 9 minutes ago)
2026-04-20T19:34:29Z node/ip-10-0-108-52.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-878gww7s-26143.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-108-52.ec2.internal?timeout=10s - read tcp 10.0.108.52:44596->10.0.62.16:6443: read: connection reset by peer
#2046033479585501184 (junit, 18 hours ago)
2026-04-20T02:00:56Z node/ip-10-0-52-161.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fgfyskvt-26143.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-161.ec2.internal?timeout=10s - read tcp 10.0.52.161:47730->10.0.51.158:6443: read: connection reset by peer
2026-04-20T02:01:02Z node/ip-10-0-85-206.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fgfyskvt-26143.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-85-206.ec2.internal?timeout=10s - read tcp 10.0.85.206:59094->10.0.51.158:6443: read: connection reset by peer
#2045949640649478144 (junit, 23 hours ago)
2026-04-19T20:31:40Z node/ip-10-0-103-64.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nbi9kqwb-26143.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-64.ec2.internal?timeout=10s - read tcp 10.0.103.64:50860->10.0.68.121:6443: read: connection reset by peer
#2045679503849558016 (junit, 41 hours ago)
2026-04-19T02:37:06Z node/ip-10-0-124-3.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jk16k7cp-26143.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-124-3.us-west-1.compute.internal?timeout=10s - read tcp 10.0.124.3:51788->10.0.7.179:6443: read: connection reset by peer
#2045600227305459712 (junit, 46 hours ago)
2026-04-18T21:23:25Z node/ip-10-0-58-87.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bfqtl9x3-26143.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-87.ec2.internal?timeout=10s - read tcp 10.0.58.87:46182->10.0.119.139:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.20-upgrade-from-stable-4.19-ocp-e2e-aws-ovn-upgrade-multi-x-ax (all) - 7 runs, 14% failed, 600% of failures match = 86% impact
#2046294668584423425 (junit, 12 minutes ago)
2026-04-20T19:46:40Z node/ip-10-0-40-148.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9nnw3bb0-fb5e4.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-148.us-west-2.compute.internal?timeout=10s - read tcp 10.0.40.148:42928->10.0.120.217:6443: read: connection reset by peer
2026-04-20T19:46:41Z node/ip-10-0-38-42.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9nnw3bb0-fb5e4.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-38-42.us-west-2.compute.internal?timeout=10s - read tcp 10.0.38.42:34334->10.0.62.100:6443: read: connection reset by peer
2026-04-20T19:50:34Z node/ip-10-0-57-172.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9nnw3bb0-fb5e4.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-57-172.us-west-2.compute.internal?timeout=10s - read tcp 10.0.57.172:45816->10.0.120.217:6443: read: connection reset by peer
2026-04-20T20:29:27Z node/ip-10-0-55-182.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9nnw3bb0-fb5e4.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-55-182.us-west-2.compute.internal?timeout=10s - read tcp 10.0.55.182:39480->10.0.120.217:6443: read: connection reset by peer
2026-04-20T20:29:34Z node/ip-10-0-38-42.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9nnw3bb0-fb5e4.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-38-42.us-west-2.compute.internal?timeout=10s - read tcp 10.0.38.42:53710->10.0.120.217:6443: read: connection reset by peer
2026-04-20T20:43:12Z node/ip-10-0-117-117.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9nnw3bb0-fb5e4.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-117.us-west-2.compute.internal?timeout=10s - read tcp 10.0.117.117:53478->10.0.120.217:6443: read: connection reset by peer
2026-04-20T20:43:14Z node/ip-10-0-20-177.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9nnw3bb0-fb5e4.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-177.us-west-2.compute.internal?timeout=10s - read tcp 10.0.20.177:38964->10.0.120.217:6443: read: connection reset by peer
#2046174125692555264 (junit, 8 hours ago)
2026-04-20T11:42:11Z node/ip-10-0-40-149.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zjx5ccds-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-149.ec2.internal?timeout=10s - read tcp 10.0.40.149:36406->10.0.124.147:6443: read: connection reset by peer
2026-04-20T11:42:18Z node/ip-10-0-24-173.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zjx5ccds-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-173.ec2.internal?timeout=10s - read tcp 10.0.24.173:39310->10.0.124.147:6443: read: connection reset by peer
2026-04-20T11:42:21Z node/ip-10-0-40-149.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zjx5ccds-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-149.ec2.internal?timeout=10s - read tcp 10.0.40.149:37124->10.0.124.147:6443: read: connection reset by peer
2026-04-20T11:46:06Z node/ip-10-0-40-149.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zjx5ccds-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-149.ec2.internal?timeout=10s - read tcp 10.0.40.149:59080->10.0.11.33:6443: read: connection reset by peer
2026-04-20T11:50:18Z node/ip-10-0-90-28.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zjx5ccds-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-28.ec2.internal?timeout=10s - read tcp 10.0.90.28:44286->10.0.124.147:6443: read: connection reset by peer
2026-04-20T12:27:26Z node/ip-10-0-90-28.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zjx5ccds-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-28.ec2.internal?timeout=10s - read tcp 10.0.90.28:44336->10.0.124.147:6443: read: connection reset by peer
2026-04-20T12:42:06Z node/ip-10-0-112-30.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zjx5ccds-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-30.ec2.internal?timeout=10s - read tcp 10.0.112.30:36694->10.0.11.33:6443: read: connection reset by peer
2026-04-20T12:42:18Z node/ip-10-0-31-118.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zjx5ccds-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-118.ec2.internal?timeout=10s - read tcp 10.0.31.118:41498->10.0.11.33:6443: read: connection reset by peer
#2046041672382418944 (junit, 17 hours ago)
Apr 20 03:39:02.815 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source: failed to apply / update (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source: Patch "https://api-int.ci-op-gqxvy8m0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apps/v1/namespaces/openshift-network-diagnostics/deployments/network-check-source?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.81.215:59816->10.0.80.62:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 03:39:02.815 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source: failed to apply / update (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source: Patch "https://api-int.ci-op-gqxvy8m0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apps/v1/namespaces/openshift-network-diagnostics/deployments/network-check-source?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.81.215:59816->10.0.80.62:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 03:39:30.827 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046041672382418944 (junit, 17 hours ago)
2026-04-20T02:56:48Z node/ip-10-0-92-171.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gqxvy8m0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-92-171.ec2.internal?timeout=10s - read tcp 10.0.92.171:39324->10.0.80.62:6443: read: connection reset by peer
2026-04-20T02:56:55Z node/ip-10-0-45-83.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gqxvy8m0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-45-83.ec2.internal?timeout=10s - read tcp 10.0.45.83:45160->10.0.80.62:6443: read: connection reset by peer
2026-04-20T03:35:34Z node/ip-10-0-45-83.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gqxvy8m0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-45-83.ec2.internal?timeout=10s - context deadline exceeded
#2046041672382418944 (junit, 17 hours ago)
2026-04-20T03:35:35Z node/ip-10-0-92-171.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gqxvy8m0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-92-171.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T03:39:41Z node/ip-10-0-4-162.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gqxvy8m0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-162.ec2.internal?timeout=10s - read tcp 10.0.4.162:60308->10.0.52.196:6443: read: connection reset by peer
2026-04-20T03:46:22Z node/ip-10-0-15-175.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gqxvy8m0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-175.ec2.internal?timeout=10s - read tcp 10.0.15.175:54040->10.0.52.196:6443: read: connection reset by peer
2026-04-20T03:53:07Z node/ip-10-0-4-162.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gqxvy8m0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-162.ec2.internal?timeout=10s - read tcp 10.0.4.162:44592->10.0.52.196:6443: read: connection reset by peer
#2045956882429906944 (junit, 22 hours ago)
2026-04-19T21:05:43Z node/ip-10-0-127-63.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-63.us-east-2.compute.internal?timeout=10s - read tcp 10.0.127.63:55988->10.0.10.111:6443: read: connection reset by peer
2026-04-19T21:05:43Z node/ip-10-0-112-97.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-97.us-east-2.compute.internal?timeout=10s - read tcp 10.0.112.97:46418->10.0.10.111:6443: read: connection reset by peer
2026-04-19T21:05:44Z node/ip-10-0-49-175.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-175.us-east-2.compute.internal?timeout=10s - read tcp 10.0.49.175:35188->10.0.85.189:6443: read: connection reset by peer
2026-04-19T21:33:58Z node/ip-10-0-58-7.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-7.us-east-2.compute.internal?timeout=10s - read tcp 10.0.58.7:60890->10.0.85.189:6443: read: connection reset by peer
2026-04-19T21:34:16Z node/ip-10-0-90-81.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-81.us-east-2.compute.internal?timeout=10s - read tcp 10.0.90.81:40948->10.0.85.189:6443: read: connection reset by peer
2026-04-19T21:38:19Z node/ip-10-0-4-26.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-26.us-east-2.compute.internal?timeout=10s - read tcp 10.0.4.26:42260->10.0.85.189:6443: read: connection reset by peer
2026-04-19T21:38:21Z node/ip-10-0-112-97.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-97.us-east-2.compute.internal?timeout=10s - read tcp 10.0.112.97:43244->10.0.85.189:6443: read: connection reset by peer
2026-04-19T21:38:27Z node/ip-10-0-49-175.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-175.us-east-2.compute.internal?timeout=10s - read tcp 10.0.49.175:42650->10.0.10.111:6443: read: connection reset by peer
2026-04-19T21:42:20Z node/ip-10-0-112-168.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-168.us-east-2.compute.internal?timeout=10s - read tcp 10.0.112.168:32902->10.0.10.111:6443: read: connection reset by peer
2026-04-19T21:42:55Z node/ip-10-0-90-81.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-81.us-east-2.compute.internal?timeout=10s - read tcp 10.0.90.81:40810->10.0.85.189:6443: read: connection reset by peer
2026-04-19T22:23:25Z node/ip-10-0-112-97.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-97.us-east-2.compute.internal?timeout=10s - read tcp 10.0.112.97:44180->10.0.85.189:6443: read: connection reset by peer
2026-04-19T22:31:24Z node/ip-10-0-49-175.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-175.us-east-2.compute.internal?timeout=10s - read tcp 10.0.49.175:38752->10.0.85.189:6443: read: connection reset by peer
2026-04-19T22:31:29Z node/ip-10-0-24-2.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-2.us-east-2.compute.internal?timeout=10s - read tcp 10.0.24.2:53516->10.0.85.189:6443: read: connection reset by peer
2026-04-19T22:34:42Z node/ip-10-0-112-97.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s96yth6i-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-97.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045822618916884480 (junit, 31 hours ago)
2026-04-19T12:12:35Z node/ip-10-0-107-40.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1xfgwpii-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-40.us-west-2.compute.internal?timeout=10s - read tcp 10.0.107.40:52026->10.0.60.81:6443: read: connection reset by peer
2026-04-19T12:16:45Z node/ip-10-0-20-218.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1xfgwpii-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-218.us-west-2.compute.internal?timeout=10s - read tcp 10.0.20.218:47062->10.0.60.81:6443: read: connection reset by peer
2026-04-19T12:17:12Z node/ip-10-0-6-141.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1xfgwpii-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-141.us-west-2.compute.internal?timeout=10s - read tcp 10.0.6.141:59022->10.0.81.105:6443: read: connection reset by peer
2026-04-19T12:42:24Z node/ip-10-0-6-141.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1xfgwpii-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-141.us-west-2.compute.internal?timeout=10s - read tcp 10.0.6.141:43260->10.0.81.105:6443: read: connection reset by peer
2026-04-19T12:46:24Z node/ip-10-0-91-19.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1xfgwpii-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-91-19.us-west-2.compute.internal?timeout=10s - read tcp 10.0.91.19:51498->10.0.60.81:6443: read: connection reset by peer
2026-04-19T13:36:35Z node/ip-10-0-91-19.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1xfgwpii-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-91-19.us-west-2.compute.internal?timeout=10s - read tcp 10.0.91.19:33256->10.0.81.105:6443: read: connection reset by peer
#2045686171102613504 (junit, 41 hours ago)
2026-04-19T02:55:28Z node/ip-10-0-90-14.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ihht0w0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-14.ec2.internal?timeout=10s - read tcp 10.0.90.14:50526->10.0.122.69:6443: read: connection reset by peer
2026-04-19T02:55:39Z node/ip-10-0-127-48.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ihht0w0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-48.ec2.internal?timeout=10s - read tcp 10.0.127.48:35974->10.0.122.69:6443: read: connection reset by peer
2026-04-19T02:56:03Z node/ip-10-0-44-174.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ihht0w0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-174.ec2.internal?timeout=10s - read tcp 10.0.44.174:60758->10.0.122.69:6443: read: connection reset by peer
2026-04-19T03:21:44Z node/ip-10-0-23-102.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ihht0w0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-102.ec2.internal?timeout=10s - read tcp 10.0.23.102:43198->10.0.122.69:6443: read: connection reset by peer
2026-04-19T03:25:30Z node/ip-10-0-25-85.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ihht0w0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-85.ec2.internal?timeout=10s - read tcp 10.0.25.85:47722->10.0.41.219:6443: read: connection reset by peer
2026-04-19T03:25:35Z node/ip-10-0-127-48.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ihht0w0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-48.ec2.internal?timeout=10s - read tcp 10.0.127.48:39538->10.0.41.219:6443: read: connection reset by peer
2026-04-19T03:25:42Z node/ip-10-0-44-174.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ihht0w0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-174.ec2.internal?timeout=10s - read tcp 10.0.44.174:43782->10.0.41.219:6443: read: connection reset by peer
2026-04-19T03:29:35Z node/ip-10-0-25-85.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ihht0w0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-85.ec2.internal?timeout=10s - read tcp 10.0.25.85:45150->10.0.122.69:6443: read: connection reset by peer
2026-04-19T03:29:41Z node/ip-10-0-55-41.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ihht0w0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-55-41.ec2.internal?timeout=10s - read tcp 10.0.55.41:57800->10.0.122.69:6443: read: connection reset by peer
2026-04-19T03:29:44Z node/ip-10-0-59-104.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ihht0w0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-104.ec2.internal?timeout=10s - read tcp 10.0.59.104:47412->10.0.122.69:6443: read: connection reset by peer
2026-04-19T04:06:12Z node/ip-10-0-90-14.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ihht0w0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-14.ec2.internal?timeout=10s - read tcp 10.0.90.14:48524->10.0.41.219:6443: read: connection reset by peer
2026-04-19T04:09:41Z node/ip-10-0-55-41.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ihht0w0-fb5e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-55-41.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-ci-4.15-upgrade-from-stable-4.14-from-stable-4.13-e2e-aws-sdn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275164018053120 junit 11 minutes ago
2026-04-20T18:57:10Z node/ip-10-0-160-184.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-y4gnspqm-35727.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-160-184.ec2.internal?timeout=10s - read tcp 10.0.160.184:53178->10.0.129.44:6443: read: connection reset by peer
2026-04-20T19:39:02Z node/ip-10-0-173-66.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-y4gnspqm-35727.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-173-66.ec2.internal?timeout=10s - read tcp 10.0.173.66:34946->10.0.199.46:6443: read: connection reset by peer
2026-04-20T19:45:59Z node/ip-10-0-177-181.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-y4gnspqm-35727.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-177-181.ec2.internal?timeout=10s - read tcp 10.0.177.181:40080->10.0.199.46:6443: read: connection reset by peer
2026-04-20T20:50:36Z node/ip-10-0-180-14.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-y4gnspqm-35727.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-180-14.ec2.internal?timeout=10s - read tcp 10.0.180.14:45570->10.0.129.44:6443: read: connection reset by peer
2026-04-20T20:56:36Z node/ip-10-0-233-17.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-y4gnspqm-35727.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-233-17.ec2.internal?timeout=10s - read tcp 10.0.233.17:54928->10.0.199.46:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.17-e2e-aws-ovn-upgrade (all) - 6 runs, 17% failed, 100% of failures match = 17% impact
#2046285315211005952 junit 13 minutes ago
E0420 19:01:12.345864       1 leaderelection.go:347] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-hq91nqrv-531a8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.118.41:6443: connect: connection refused
E0420 19:11:15.688813       1 request.go:1116] Unexpected error when reading response body: read tcp 10.0.4.86:44178->10.0.26.16:6443: read: connection reset by peer
E0420 19:11:15.688893       1 leaderelection.go:347] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: unexpected error when reading response body. Please retry. Original error: read tcp 10.0.4.86:44178->10.0.26.16:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.14-upgrade-from-stable-4.13-e2e-aws-upgrade-ovn-single-node (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046276050240933888 junit 14 minutes ago
Apr 20 18:29:52.212 - 4s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/55e4b4bf-034e-4024-a662-b1024ad775ca backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": net/http: TLS handshake timeout
Apr 20 18:29:56.213 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/fb0b57c2-93f8-453a-85e1-a83b56f6988a backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:47002->18.213.11.17:6443: read: connection reset by peer
Apr 20 18:29:57.213 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/35fd3d18-de20-4ec4-97fa-6fbb19198867 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:59190->100.25.18.141:6443: read: connection reset by peer
Apr 20 18:29:58.212 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/72fb1cfd-5c42-471c-8b21-3405349f3332 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:59234->100.25.18.141:6443: read: connection reset by peer
Apr 20 18:29:59.212 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/7759707a-8f31-4de6-ba97-7333fa8a5132 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:47194->18.213.11.17:6443: read: connection reset by peer
Apr 20 18:30:00.213 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/784ed8d6-71bc-450d-bb26-17e85baf77ff backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:59296->100.25.18.141:6443: read: connection reset by peer
Apr 20 18:30:01.213 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/8fa0b4ef-324c-4068-a74f-361a267396af backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:41414->18.213.11.17:6443: read: connection reset by peer
Apr 20 18:30:02.212 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/9959e320-3be2-4df8-b74e-55c1c4d6330c backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:33572->100.25.18.141:6443: read: connection reset by peer
Apr 20 18:30:03.213 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/c4188356-abef-4e87-a969-9d7e53f15ed8 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:41626->18.213.11.17:6443: read: connection reset by peer
Apr 20 18:30:04.212 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/125d628f-6a29-4350-861e-e414a7523a49 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:33654->100.25.18.141:6443: read: connection reset by peer
Apr 20 18:30:05.212 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/e7c3903e-9465-4f5e-b1cc-986a9d33304a backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:33756->100.25.18.141:6443: read: connection reset by peer
Apr 20 18:30:06.213 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/99f45200-9c52-48ae-b1b5-864229f9f073 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:41766->18.213.11.17:6443: read: connection reset by peer
Apr 20 18:30:07.213 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/88d92878-4303-48b1-8ff5-662e4a705d98 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:33870->100.25.18.141:6443: read: connection reset by peer
Apr 20 18:30:08.212 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/15ca873d-00cb-4f4e-8ed6-eb2f7f5e96b8 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:41884->18.213.11.17:6443: read: connection reset by peer
Apr 20 18:30:09.213 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/6b7fdb10-8dcf-4963-a33a-e2cf295056b4 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-7xc1g3b0-1662f.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.86.101:41936->18.213.11.17:6443: read: connection reset by peer

... 3 lines not shown

periodic-ci-openshift-multiarch-main-nightly-4.20-ocp-e2e-ovn-serial-aws-multi-a-a-1of2 (all) - 7 runs, 0% failed, 29% of runs match
#2046293959226953728 junit 16 minutes ago
2026-04-20T19:12:35Z node/ip-10-0-114-229.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ndbrzfqr-684ac.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-229.us-west-2.compute.internal?timeout=10s - read tcp 10.0.114.229:44774->10.0.70.53:6443: read: connection reset by peer
2026-04-20T19:15:59Z node/ip-10-0-104-211.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ndbrzfqr-684ac.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-211.us-west-2.compute.internal?timeout=10s - read tcp 10.0.104.211:52340->10.0.24.22:6443: read: connection reset by peer
2026-04-20T19:16:09Z node/ip-10-0-114-229.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ndbrzfqr-684ac.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-229.us-west-2.compute.internal?timeout=10s - read tcp 10.0.114.229:50560->10.0.70.53:6443: read: connection reset by peer
2026-04-20T19:16:15Z node/ip-10-0-36-233.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ndbrzfqr-684ac.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-233.us-west-2.compute.internal?timeout=10s - read tcp 10.0.36.233:41122->10.0.24.22:6443: read: connection reset by peer
#2045686220448600064 junit 41 hours ago
2026-04-19T03:02:11Z node/ip-10-0-23-84.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-bb2sirik-684ac.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-84.us-west-1.compute.internal?timeout=10s - read tcp 10.0.23.84:49376->10.0.59.231:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.14-e2e-aws-upgrade-ovn-single-node (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046274845766848512 junit 18 minutes ago
Apr 20 18:49:58.447 - 15s   E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/e4f9d928-226e-4301-b641-b610d65cdafb backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": EOF
Apr 20 18:50:14.446 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/6f709c14-3625-4638-bebc-febc9152eb64 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:33246->16.146.207.154:6443: read: connection reset by peer
Apr 20 18:50:15.447 - 5s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/d75772f6-2385-4173-b542-dc98c1f194f1 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": net/http: TLS handshake timeout
Apr 20 18:50:20.447 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/1398c6c9-96e1-4ca3-86a3-9db11fd5748f backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:47436->16.146.207.154:6443: read: connection reset by peer
Apr 20 18:50:21.446 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/1ab23922-068a-474e-822c-695a3b1723bc backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:47548->16.146.207.154:6443: read: connection reset by peer
Apr 20 18:50:22.447 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/90a5b42c-a380-43a1-a2a2-e9c6d9eabaeb backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:47650->16.146.207.154:6443: read: connection reset by peer
Apr 20 18:50:23.447 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/222a88af-733b-436c-a712-fd19b5fd8e52 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:47722->16.146.207.154:6443: read: connection reset by peer
Apr 20 18:50:24.447 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/1bb613e0-1938-4abd-bb32-284c07368da5 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:47846->16.146.207.154:6443: read: connection reset by peer
Apr 20 18:50:25.447 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/44e31dc4-563b-49c7-abd1-b1bc1a7f9753 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:47964->16.146.207.154:6443: read: connection reset by peer
Apr 20 18:50:26.446 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/97215a6c-528b-4ed2-9416-a4dd19881d9e backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:48046->16.146.207.154:6443: read: connection reset by peer
Apr 20 18:50:27.447 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/52bbb3fb-b308-47dd-b01e-34ade53d0668 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:54700->16.146.207.154:6443: read: connection reset by peer
Apr 20 18:50:28.447 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/38088d06-920b-4a91-af4a-74694094b2a5 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:54804->16.146.207.154:6443: read: connection reset by peer
Apr 20 18:50:29.447 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/8f9e8aa7-b3ba-4c20-a6ca-75239e0642e4 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:54888->16.146.207.154:6443: read: connection reset by peer
Apr 20 18:50:30.447 - 1s    E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/68d54cb0-aa58-459e-8b48-2124001cb5b3 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:50022->44.226.100.86:6443: read: connection reset by peer
Apr 20 18:50:31.447 - 999ms E backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/b20adddf-ccd2-480e-acaf-32e5b45666f2 backend-disruption-name/kube-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://api.ci-op-jqnp8fg1-d4394.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default": read tcp 172.24.146.78:50056->44.226.100.86:6443: read: connection reset by peer

... 5 lines not shown

periodic-ci-openshift-release-main-ci-4.12-e2e-gcp-sdn-upgrade (all) - 3 runs, 0% failed, 67% of runs match
#2046282818224394240 junit 17 minutes ago
Apr 20 19:52:16.060 E ns/openshift-e2e-loki pod/loki-promtail-j6x5p node/ci-op-tc3w4l1w-0000e-xx597-master-0 uid/680a27aa-91d5-4d8e-afb1-b9a7db722365 container/promtail reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Apr 20 19:52:26.627 - 1s    E disruption/cache-openshift-api connection/new reason/DisruptionBegan disruption/cache-openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-tc3w4l1w-0000e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams?resourceVersion=0": read tcp 10.129.112.140:43624->34.56.179.128:6443: read: connection reset by peer
Apr 20 19:52:39.866 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-677d99977f-lv9kr node/ci-op-tc3w4l1w-0000e-xx597-master-2 uid/5aff59c5-5019-40dd-b63c-31079967ca0d container/kube-storage-version-migrator-operator reason/ContainerExit code/1 cause/Error atus":"False","type":"Progressing"},{"lastTransitionTime":"2026-04-20T19:44:43Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-04-20T18:29:27Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0420 19:44:43.609303       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"a1fa345a-486c-42ee-833d-99f357231b46", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")\nI0420 19:52:36.973391       1 cmd.go:100] Received SIGTERM or SIGINT signal, shutting down controller.\nI0420 19:52:36.973921       1 genericapiserver.go:593] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"\nI0420 19:52:36.973968       1 genericapiserver.go:493] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0420 19:52:36.974238       1 base_controller.go:167] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0420 19:52:36.974327       1 base_controller.go:145] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated\nI0420 19:52:36.974374       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0420 19:52:36.974428       1 base_controller.go:167] Shutting down KubeStorageVersionMigrator ...\nI0420 19:52:36.974492       1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0420 19:52:36.974531       1 base_controller.go:167] Shutting down StaticConditionsController ...\nI0420 19:52:36.974569       1 base_controller.go:167] Shutting down KubeStorageVersionMigratorStaticResources ...\nW0420 19:52:36.974699       1 builder.go:109] graceful termination failed, controllers failed with error: stopped\n
#2046282818224394240 junit 17 minutes ago
# [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path [Skipped:NoOptionalCapabilities] [Suite:openshift/conformance/parallel] [Suite:k8s]
fail [k8s.io/kubernetes@v1.25.0/test/e2e/storage/utils/local.go:239]: Apr 20 20:24:43.772: error sending request: Post "https://api.ci-op-tc3w4l1w-0000e.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/e2e-provisioning-907/pods/hostexec-ci-op-tc3w4l1w-0000e-xx597-worker-f-h82wv-qrzjg/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+%2Ftmp%2Flocal-driver-37416f0c-6500-49a5-85fc-9c781b7c749b+%26%26+mount+--bind+%2Ftmp%2Flocal-driver-37416f0c-6500-49a5-85fc-9c781b7c749b+%2Ftmp%2Flocal-driver-37416f0c-6500-49a5-85fc-9c781b7c749b&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true": read tcp 10.129.112.140:40956->34.56.179.128:6443: read: connection reset by peer
Ginkgo exit error 1: exit with code 1
#2046019783920455680 junit 18 hours ago
Apr 20 01:02:58.028 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-d8bdd5cc8-rd7mc node/ci-op-f4zgiqhn-0000e-6lr77-master-1 uid/5a549d82-ce01-4c47-9d2b-2959817e9f91 container/kube-apiserver-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Apr 20 01:08:11.232 - 999ms E disruption/cache-openshift-api connection/new reason/DisruptionBegan disruption/cache-openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-f4zgiqhn-0000e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams?resourceVersion=0": read tcp 10.131.222.98:39208->34.61.42.135:6443: read: connection reset by peer
Apr 20 01:12:09.670 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-74cb77b868-mhbns node/ci-op-f4zgiqhn-0000e-6lr77-master-1 uid/835ca32b-7223-4432-a643-744426072154 container/kube-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-ovn-etcd-scaling (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046277060187394048 junit 18 minutes ago
2026-04-20T20:32:53Z node/ip-10-0-127-62.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w4jjfd13-018ee.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-62.us-west-1.compute.internal?timeout=10s - read tcp 10.0.127.62:45324->10.0.116.130:6443: read: connection reset by peer
2026-04-20T20:33:06Z node/ip-10-0-58-35.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w4jjfd13-018ee.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-35.us-west-1.compute.internal?timeout=10s - read tcp 10.0.58.35:49336->10.0.26.188:6443: read: connection reset by peer
2026-04-20T20:33:06Z node/ip-10-0-71-139.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w4jjfd13-018ee.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-139.us-west-1.compute.internal?timeout=10s - read tcp 10.0.71.139:47216->10.0.116.130:6443: read: connection reset by peer
2026-04-20T20:36:22Z node/ip-10-0-80-149.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w4jjfd13-018ee.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-149.us-west-1.compute.internal?timeout=10s - read tcp 10.0.80.149:56332->10.0.116.130:6443: read: connection reset by peer
2026-04-20T20:36:30Z node/ip-10-0-85-131.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w4jjfd13-018ee.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-85-131.us-west-1.compute.internal?timeout=10s - read tcp 10.0.85.131:41214->10.0.116.130:6443: read: connection reset by peer
2026-04-20T20:48:53Z node/ip-10-0-127-62.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w4jjfd13-018ee.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-62.us-west-1.compute.internal?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-ovn-proxy (all) - 8 runs, 0% failed, 100% of runs match
#2046288860324827136 junit 19 minutes ago
2026-04-20T19:27:02Z node/ip-10-0-70-75.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zkk2q53w-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-75.ec2.internal?timeout=10s - read tcp 10.0.70.75:59510->10.0.62.52:6443: read: connection reset by peer
#2046165558583365632 junit 9 hours ago
2026-04-20T10:59:06Z node/ip-10-0-58-59.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yfvfhx29-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-59.ec2.internal?timeout=10s - read tcp 10.0.58.59:47542->10.0.78.124:6443: read: connection reset by peer
2026-04-20T10:59:14Z node/ip-10-0-76-199.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yfvfhx29-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-199.ec2.internal?timeout=10s - read tcp 10.0.76.199:47656->10.0.58.100:6443: read: connection reset by peer
2026-04-20T10:59:19Z node/ip-10-0-52-241.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yfvfhx29-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-241.ec2.internal?timeout=10s - read tcp 10.0.52.241:41842->10.0.58.100:6443: read: connection reset by peer
#2046033506429046784 junit 17 hours ago
2026-04-20T01:59:51Z node/ip-10-0-74-155.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hqbir8tz-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-74-155.us-west-2.compute.internal?timeout=10s - read tcp 10.0.74.155:33358->10.0.52.23:6443: read: connection reset by peer
#2045949793343115264 junit 23 hours ago
2026-04-19T20:31:35Z node/ip-10-0-51-71.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-tidvsf7l-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-71.ec2.internal?timeout=10s - read tcp 10.0.51.71:33322->10.0.49.104:6443: read: connection reset by peer
2026-04-19T20:31:38Z node/ip-10-0-56-163.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-tidvsf7l-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-56-163.ec2.internal?timeout=10s - read tcp 10.0.56.163:55040->10.0.78.4:6443: read: connection reset by peer
2026-04-19T20:31:47Z node/ip-10-0-64-181.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-tidvsf7l-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-181.ec2.internal?timeout=10s - read tcp 10.0.64.181:45472->10.0.49.104:6443: read: connection reset by peer
#2045811558554013696 junit 32 hours ago
2026-04-19T11:17:12Z node/ip-10-0-70-47.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hy0gtnbq-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-47.us-west-2.compute.internal?timeout=10s - read tcp 10.0.70.47:36592->10.0.52.136:6443: read: connection reset by peer
2026-04-19T11:17:20Z node/ip-10-0-57-89.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hy0gtnbq-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-57-89.us-west-2.compute.internal?timeout=10s - read tcp 10.0.57.89:57728->10.0.68.109:6443: read: connection reset by peer
2026-04-19T11:17:20Z node/ip-10-0-60-115.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hy0gtnbq-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-115.us-west-2.compute.internal?timeout=10s - read tcp 10.0.60.115:39318->10.0.68.109:6443: read: connection reset by peer
#2045679529942323200 junit 41 hours ago
2026-04-19T02:33:08Z node/ip-10-0-75-161.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mt1bfn64-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-75-161.us-west-2.compute.internal?timeout=10s - read tcp 10.0.75.161:53964->10.0.58.95:6443: read: connection reset by peer
2026-04-19T02:33:10Z node/ip-10-0-55-132.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mt1bfn64-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-55-132.us-west-2.compute.internal?timeout=10s - read tcp 10.0.55.132:51246->10.0.67.214:6443: read: connection reset by peer
#2045654072274456576 junit 43 hours ago
2026-04-19T00:52:50Z node/ip-10-0-55-8.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9c59wks4-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-55-8.ec2.internal?timeout=10s - read tcp 10.0.55.8:39744->10.0.68.62:6443: read: connection reset by peer
2026-04-19T00:52:55Z node/ip-10-0-58-102.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9c59wks4-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-102.ec2.internal?timeout=10s - read tcp 10.0.58.102:34000->10.0.68.62:6443: read: connection reset by peer
#2045600301955682304 junit 46 hours ago
2026-04-18T21:19:23Z node/ip-10-0-58-183.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jrwrjswh-9532e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-183.us-east-2.compute.internal?timeout=10s - read tcp 10.0.58.183:58150->10.0.73.33:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.15-upgrade-from-stable-4.13-e2e-aws-ovn-upgrade-paused (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046276050475814912 junit 19 minutes ago
2026-04-20T18:30:50Z node/ip-10-0-178-227.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-7stqznzs-1f1aa.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-178-227.us-west-2.compute.internal?timeout=10s - read tcp 10.0.178.227:42668->10.0.130.133:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.20-ocp-e2e-aws-ovn-multi-x-ax (all) - 7 runs, 29% failed, 200% of failures match = 57% impact
#2046294667732979712 junit 24 minutes ago
2026-04-20T19:18:13Z node/ip-10-0-92-89.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-il1m1vxy-92804.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-92-89.ec2.internal?timeout=10s - read tcp 10.0.92.89:54872->10.0.41.72:6443: read: connection reset by peer
#2046041657178066944 junit 17 hours ago
2026-04-20T02:36:01Z node/ip-10-0-111-18.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-2s64jr3g-92804.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-18.us-west-2.compute.internal?timeout=10s - read tcp 10.0.111.18:54436->10.0.39.121:6443: read: connection reset by peer
2026-04-20T02:36:05Z node/ip-10-0-23-70.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-2s64jr3g-92804.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-70.us-west-2.compute.internal?timeout=10s - read tcp 10.0.23.70:54112->10.0.96.103:6443: read: connection reset by peer
2026-04-20T02:36:10Z node/ip-10-0-70-161.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-2s64jr3g-92804.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-161.us-west-2.compute.internal?timeout=10s - read tcp 10.0.70.161:46410->10.0.96.103:6443: read: connection reset by peer
#2045956842328166400 junit 22 hours ago
2026-04-19T20:58:02Z node/ip-10-0-48-122.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-lh0cl9v7-92804.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-122.us-west-1.compute.internal?timeout=10s - read tcp 10.0.48.122:44470->10.0.36.20:6443: read: connection reset by peer
2026-04-19T20:58:06Z node/ip-10-0-5-222.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-lh0cl9v7-92804.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-222.us-west-1.compute.internal?timeout=10s - read tcp 10.0.5.222:35164->10.0.36.20:6443: read: connection reset by peer
2026-04-19T21:02:17Z node/ip-10-0-48-122.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-lh0cl9v7-92804.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-122.us-west-1.compute.internal?timeout=10s - read tcp 10.0.48.122:49728->10.0.97.46:6443: read: connection reset by peer
#2045605724062486528 junit 46 hours ago
2026-04-18T21:49:12Z node/ip-10-0-5-226.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ltkzlmxt-92804.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-226.ec2.internal?timeout=10s - read tcp 10.0.5.226:60288->10.0.109.215:6443: read: connection reset by peer
2026-04-18T21:49:14Z node/ip-10-0-46-107.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ltkzlmxt-92804.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-107.ec2.internal?timeout=10s - read tcp 10.0.46.107:32814->10.0.109.215:6443: read: connection reset by peer
2026-04-18T21:49:26Z node/ip-10-0-24-80.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ltkzlmxt-92804.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-80.ec2.internal?timeout=10s - read tcp 10.0.24.80:49224->10.0.3.12:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.18-e2e-aws-ovn-upgrade-out-of-change (all) - 1 runs, 0% failed, 100% of runs match
#2046275471041105920 junit 22 minutes ago
2026-04-20T18:32:23Z node/ip-10-0-103-52.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ttwrxg9l-50552.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-52.us-east-2.compute.internal?timeout=10s - read tcp 10.0.103.52:52616->10.0.29.224:6443: read: connection reset by peer
2026-04-20T18:55:40Z node/ip-10-0-62-87.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ttwrxg9l-50552.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-87.us-east-2.compute.internal?timeout=10s - read tcp 10.0.62.87:50098->10.0.29.224:6443: read: connection reset by peer
2026-04-20T19:46:03Z node/ip-10-0-122-67.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ttwrxg9l-50552.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-122-67.us-east-2.compute.internal?timeout=10s - read tcp 10.0.122.67:59154->10.0.29.224:6443: read: connection reset by peer
2026-04-20T19:46:08Z node/ip-10-0-70-165.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ttwrxg9l-50552.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-165.us-east-2.compute.internal?timeout=10s - read tcp 10.0.70.165:58058->10.0.69.196:6443: read: connection reset by peer
2026-04-20T19:46:10Z node/ip-10-0-62-87.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ttwrxg9l-50552.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-87.us-east-2.compute.internal?timeout=10s - read tcp 10.0.62.87:35078->10.0.69.196:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.20-ocp-e2e-aws-ovn-upgrade-multi-x-ax (all) - 7 runs, 0% failed, 100% of runs match
#2046294164588466176 junit 25 minutes ago
Apr 20 19:45:12.152 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter: failed to apply / update (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter: Patch "https://api-int.ci-op-2bjl3qxb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.55.170:59148->10.0.74.171:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 19:45:12.152 - 27s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter: failed to apply / update (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter: Patch "https://api-int.ci-op-2bjl3qxb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.55.170:59148->10.0.74.171:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 19:45:40.139 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046294164588466176 junit 25 minutes ago
2026-04-20T19:11:59Z node/ip-10-0-60-246.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2bjl3qxb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-246.ec2.internal?timeout=10s - read tcp 10.0.60.246:43350->10.0.51.179:6443: read: connection reset by peer
2026-04-20T19:41:14Z node/ip-10-0-16-12.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2bjl3qxb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-12.ec2.internal?timeout=10s - read tcp 10.0.16.12:41302->10.0.74.171:6443: read: connection reset by peer
2026-04-20T19:41:22Z node/ip-10-0-45-188.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2bjl3qxb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-45-188.ec2.internal?timeout=10s - read tcp 10.0.45.188:47174->10.0.74.171:6443: read: connection reset by peer
2026-04-20T19:41:39Z node/ip-10-0-120-254.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2bjl3qxb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-254.ec2.internal?timeout=10s - read tcp 10.0.120.254:58442->10.0.51.179:6443: read: connection reset by peer
2026-04-20T19:45:19Z node/ip-10-0-60-246.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2bjl3qxb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-246.ec2.internal?timeout=10s - read tcp 10.0.60.246:36632->10.0.74.171:6443: read: connection reset by peer
2026-04-20T19:45:23Z node/ip-10-0-27-228.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2bjl3qxb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-228.ec2.internal?timeout=10s - read tcp 10.0.27.228:33598->10.0.51.179:6443: read: connection reset by peer
2026-04-20T20:29:47Z node/ip-10-0-67-57.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2bjl3qxb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-57.ec2.internal?timeout=10s - read tcp 10.0.67.57:59162->10.0.74.171:6443: read: connection reset by peer
2026-04-20T20:37:47Z node/ip-10-0-60-246.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2bjl3qxb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-246.ec2.internal?timeout=10s - read tcp 10.0.60.246:56818->10.0.51.179:6443: read: connection reset by peer
#2046174076187185152 junit 7 hours ago
2026-04-20T11:24:29Z node/ip-10-0-78-64.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-844j6hyb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-78-64.ec2.internal?timeout=10s - read tcp 10.0.78.64:41232->10.0.123.172:6443: read: connection reset by peer
2026-04-20T11:24:30Z node/ip-10-0-71-170.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-844j6hyb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-170.ec2.internal?timeout=10s - read tcp 10.0.71.170:38076->10.0.123.172:6443: read: connection reset by peer
2026-04-20T11:24:30Z node/ip-10-0-58-78.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-844j6hyb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-78.ec2.internal?timeout=10s - read tcp 10.0.58.78:37110->10.0.4.116:6443: read: connection reset by peer
2026-04-20T11:52:02Z node/ip-10-0-70-149.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-844j6hyb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-149.ec2.internal?timeout=10s - read tcp 10.0.70.149:34984->10.0.123.172:6443: read: connection reset by peer
2026-04-20T12:00:04Z node/ip-10-0-78-64.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-844j6hyb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-78-64.ec2.internal?timeout=10s - read tcp 10.0.78.64:38426->10.0.123.172:6443: read: connection reset by peer
2026-04-20T12:00:23Z node/ip-10-0-71-170.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-844j6hyb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-170.ec2.internal?timeout=10s - read tcp 10.0.71.170:42252->10.0.123.172:6443: read: connection reset by peer
2026-04-20T12:42:46Z node/ip-10-0-71-170.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-844j6hyb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-170.ec2.internal?timeout=10s - read tcp 10.0.71.170:42488->10.0.123.172:6443: read: connection reset by peer
2026-04-20T12:49:23Z node/ip-10-0-71-170.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-844j6hyb-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-170.ec2.internal?timeout=10s - read tcp 10.0.71.170:37170->10.0.4.116:6443: read: connection reset by peer
#2046041673988837376 junit 17 hours ago
2026-04-20T03:17:43Z node/ip-10-0-123-214.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xm0q64n7-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-214.us-east-2.compute.internal?timeout=10s - read tcp 10.0.123.214:48570->10.0.9.40:6443: read: connection reset by peer
2026-04-20T03:17:46Z node/ip-10-0-7-204.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xm0q64n7-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-204.us-east-2.compute.internal?timeout=10s - read tcp 10.0.7.204:54042->10.0.93.144:6443: read: connection reset by peer
2026-04-20T03:17:50Z node/ip-10-0-62-242.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xm0q64n7-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-242.us-east-2.compute.internal?timeout=10s - read tcp 10.0.62.242:58602->10.0.9.40:6443: read: connection reset by peer
2026-04-20T03:32:55Z node/ip-10-0-62-242.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xm0q64n7-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-242.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T03:36:39Z node/ip-10-0-123-214.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xm0q64n7-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-214.us-east-2.compute.internal?timeout=10s - read tcp 10.0.123.214:33306->10.0.9.40:6443: read: connection reset by peer
2026-04-20T03:40:11Z node/ip-10-0-88-7.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xm0q64n7-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-88-7.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T03:47:02Z node/ip-10-0-0-171.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xm0q64n7-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-171.us-east-2.compute.internal?timeout=10s - read tcp 10.0.0.171:47892->10.0.9.40:6443: read: connection reset by peer
2026-04-20T03:47:02Z node/ip-10-0-62-242.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xm0q64n7-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-242.us-east-2.compute.internal?timeout=10s - read tcp 10.0.62.242:56906->10.0.9.40:6443: read: connection reset by peer
#2045956879955267584 junit 23 hours ago
Apr 19 21:37:25.107 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
Apr 19 21:41:24.866 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node: failed to apply / update (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node: Patch "https://api-int.ci-op-lgb4nbil-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-ovn-kubernetes/services/ovn-kubernetes-node?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.35.126:58514->10.0.89.230:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 21:41:24.866 - 27s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node: failed to apply / update (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node: Patch "https://api-int.ci-op-lgb4nbil-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-ovn-kubernetes/services/ovn-kubernetes-node?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.35.126:58514->10.0.89.230:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 21:41:52.666 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045956879955267584 junit 23 hours ago
2026-04-19T21:16:15Z node/ip-10-0-35-126.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-lgb4nbil-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-126.us-east-2.compute.internal?timeout=10s - read tcp 10.0.35.126:41800->10.0.22.182:6443: read: connection reset by peer
2026-04-19T21:16:25Z node/ip-10-0-35-126.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-lgb4nbil-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-126.us-east-2.compute.internal?timeout=10s - read tcp 10.0.35.126:43856->10.0.89.230:6443: read: connection reset by peer
2026-04-19T21:16:29Z node/ip-10-0-48-205.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-lgb4nbil-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-205.us-east-2.compute.internal?timeout=10s - read tcp 10.0.48.205:53394->10.0.89.230:6443: read: connection reset by peer
2026-04-19T21:20:48Z node/ip-10-0-91-115.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-lgb4nbil-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-91-115.us-east-2.compute.internal?timeout=10s - read tcp 10.0.91.115:48472->10.0.89.230:6443: read: connection reset by peer
2026-04-19T21:24:32Z node/ip-10-0-55-104.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-lgb4nbil-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-55-104.us-east-2.compute.internal?timeout=10s - read tcp 10.0.55.104:59418->10.0.22.182:6443: read: connection reset by peer
2026-04-19T21:33:33Z node/ip-10-0-89-44.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-lgb4nbil-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-44.us-east-2.compute.internal?timeout=10s - read tcp 10.0.89.44:47978->10.0.22.182:6443: read: connection reset by peer
2026-04-19T21:33:34Z node/ip-10-0-69-210.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-lgb4nbil-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-210.us-east-2.compute.internal?timeout=10s - read tcp 10.0.69.210:56528->10.0.22.182:6443: read: connection reset by peer
2026-04-19T21:33:39Z node/ip-10-0-112-115.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-lgb4nbil-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-115.us-east-2.compute.internal?timeout=10s - read tcp 10.0.112.115:59960->10.0.22.182:6443: read: connection reset by peer
2026-04-19T21:37:03Z node/ip-10-0-112-115.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-lgb4nbil-3ab26.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-115.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045822618870747136 junit 32 hours ago
2026-04-19T12:11:15Z node/ip-10-0-43-114.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mmxs7259-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-114.us-west-2.compute.internal?timeout=10s - read tcp 10.0.43.114:55324->10.0.19.14:6443: read: connection reset by peer
2026-04-19T12:45:19Z node/ip-10-0-100-28.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mmxs7259-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-28.us-west-2.compute.internal?timeout=10s - read tcp 10.0.100.28:37408->10.0.19.14:6443: read: connection reset by peer
2026-04-19T12:49:14Z node/ip-10-0-100-28.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mmxs7259-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-28.us-west-2.compute.internal?timeout=10s - read tcp 10.0.100.28:44450->10.0.19.14:6443: read: connection reset by peer
2026-04-19T12:49:17Z node/ip-10-0-47-90.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mmxs7259-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-90.us-west-2.compute.internal?timeout=10s - read tcp 10.0.47.90:56662->10.0.100.3:6443: read: connection reset by peer
2026-04-19T13:00:09Z node/ip-10-0-100-28.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mmxs7259-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-28.us-west-2.compute.internal?timeout=10s - read tcp 10.0.100.28:33984->10.0.19.14:6443: read: connection reset by peer
2026-04-19T13:07:59Z node/ip-10-0-22-172.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mmxs7259-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-172.us-west-2.compute.internal?timeout=10s - read tcp 10.0.22.172:44442->10.0.100.3:6443: read: connection reset by peer
2026-04-19T13:15:37Z node/ip-10-0-22-172.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mmxs7259-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-172.us-west-2.compute.internal?timeout=10s - read tcp 10.0.22.172:54922->10.0.19.14:6443: read: connection reset by peer
2026-04-19T13:15:39Z node/ip-10-0-47-90.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mmxs7259-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-90.us-west-2.compute.internal?timeout=10s - read tcp 10.0.47.90:33684->10.0.19.14:6443: read: connection reset by peer
#2045686171060670464 junit 41 hours ago
2026-04-19T03:23:56Z node/ip-10-0-100-228.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jri2wqf5-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-228.us-east-2.compute.internal?timeout=10s - read tcp 10.0.100.228:58328->10.0.70.210:6443: read: connection reset by peer
2026-04-19T03:27:44Z node/ip-10-0-34-61.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jri2wqf5-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-34-61.us-east-2.compute.internal?timeout=10s - read tcp 10.0.34.61:47198->10.0.32.112:6443: read: connection reset by peer
2026-04-19T03:27:51Z node/ip-10-0-100-228.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jri2wqf5-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-228.us-east-2.compute.internal?timeout=10s - read tcp 10.0.100.228:52386->10.0.70.210:6443: read: connection reset by peer
2026-04-19T03:27:54Z node/ip-10-0-69-179.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jri2wqf5-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-179.us-east-2.compute.internal?timeout=10s - read tcp 10.0.69.179:59778->10.0.70.210:6443: read: connection reset by peer
2026-04-19T03:27:55Z node/ip-10-0-61-243.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jri2wqf5-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-243.us-east-2.compute.internal?timeout=10s - read tcp 10.0.61.243:38270->10.0.32.112:6443: read: connection reset by peer
2026-04-19T03:45:20Z node/ip-10-0-60-58.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jri2wqf5-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-58.us-east-2.compute.internal?timeout=10s - read tcp 10.0.60.58:46998->10.0.70.210:6443: read: connection reset by peer
2026-04-19T03:45:40Z node/ip-10-0-60-58.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jri2wqf5-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-58.us-east-2.compute.internal?timeout=10s - read tcp 10.0.60.58:38644->10.0.32.112:6443: read: connection reset by peer
#2045605686305361920 junit 46 hours ago
2026-04-18T22:16:18Z node/ip-10-0-45-28.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t8pfhb2q-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-45-28.us-west-1.compute.internal?timeout=10s - read tcp 10.0.45.28:43618->10.0.82.79:6443: read: connection reset by peer
2026-04-18T22:20:19Z node/ip-10-0-44-147.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t8pfhb2q-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-147.us-west-1.compute.internal?timeout=10s - read tcp 10.0.44.147:53174->10.0.55.163:6443: read: connection reset by peer
#2045605686305361920 junit 46 hours ago
Apr 18 22:24:39.220 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts: Patch "https://api-int.ci-op-t8pfhb2q-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/multus-whereabouts?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.45.28:41718->10.0.55.163:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 18 22:24:39.220 - 10s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts: Patch "https://api-int.ci-op-t8pfhb2q-3ab26.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/multus-whereabouts?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.45.28:41718->10.0.55.163:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 18 22:24:49.554 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
periodic-ci-openshift-release-main-ci-4.15-e2e-aws-sdn-upgrade-rollback (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275073416892416 junit 32 minutes ago
2026-04-20T19:23:03Z node/ip-10-0-82-112.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jmk71jdv-b895d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-82-112.ec2.internal?timeout=10s - read tcp 10.0.82.112:35020->10.0.58.83:6443: read: connection reset by peer
2026-04-20T19:43:33Z node/ip-10-0-13-93.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jmk71jdv-b895d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-93.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-ci-4.15-e2e-aws-sdn-upgrade-out-of-change (all) - 1 runs, 0% failed, 100% of runs match
#2046275073123291136 junit 29 minutes ago
2026-04-20T18:29:41Z node/ip-10-0-109-248.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3sh59bnd-c2f85.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-248.us-west-1.compute.internal?timeout=10s - read tcp 10.0.109.248:51144->10.0.32.171:6443: read: connection reset by peer
2026-04-20T18:56:33Z node/ip-10-0-27-147.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3sh59bnd-c2f85.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-147.us-west-1.compute.internal?timeout=10s - read tcp 10.0.27.147:36048->10.0.32.171:6443: read: connection reset by peer
2026-04-20T19:00:26Z node/ip-10-0-20-172.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3sh59bnd-c2f85.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-172.us-west-1.compute.internal?timeout=10s - read tcp 10.0.20.172:56628->10.0.99.20:6443: read: connection reset by peer
2026-04-20T19:00:32Z node/ip-10-0-109-248.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3sh59bnd-c2f85.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-248.us-west-1.compute.internal?timeout=10s - read tcp 10.0.109.248:60758->10.0.32.171:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.16-upgrade-from-stable-4.14-e2e-aws-ovn-upgrade-paused (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275799824207872 junit 34 minutes ago
Apr 20 19:42:04.274 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: Patch "https://api-int.ci-op-wyptdfbc-fa880.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/roles/whereabouts-cni?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.3.83:34792->10.0.97.86:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 19:42:04.274 - 47s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: Patch "https://api-int.ci-op-wyptdfbc-fa880.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/roles/whereabouts-cni?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.3.83:34792->10.0.97.86:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 19:42:52.272 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046275799824207872 junit 34 minutes ago
2026-04-20T19:41:27Z node/ip-10-0-3-83.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wyptdfbc-fa880.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-3-83.ec2.internal?timeout=10s - read tcp 10.0.3.83:48638->10.0.58.22:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.16-e2e-aws-sdn-upgrade-rollback (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275212667785216 junit 36 minutes ago
2026-04-20T18:40:58Z node/ip-10-0-98-184.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb331s0t-b8ec4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-98-184.ec2.internal?timeout=10s - read tcp 10.0.98.184:42706->10.0.64.180:6443: read: connection reset by peer
2026-04-20T18:41:09Z node/ip-10-0-98-184.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb331s0t-b8ec4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-98-184.ec2.internal?timeout=10s - read tcp 10.0.98.184:42400->10.0.64.180:6443: read: connection reset by peer
2026-04-20T19:07:09Z node/ip-10-0-123-15.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb331s0t-b8ec4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-15.ec2.internal?timeout=10s - read tcp 10.0.123.15:46978->10.0.2.195:6443: read: connection reset by peer
2026-04-20T19:10:33Z node/ip-10-0-14-244.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb331s0t-b8ec4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-244.ec2.internal?timeout=10s - read tcp 10.0.14.244:33154->10.0.64.180:6443: read: connection reset by peer
2026-04-20T19:10:38Z node/ip-10-0-90-6.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb331s0t-b8ec4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-6.ec2.internal?timeout=10s - read tcp 10.0.90.6:45548->10.0.64.180:6443: read: connection reset by peer
2026-04-20T19:10:43Z node/ip-10-0-14-244.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb331s0t-b8ec4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-244.ec2.internal?timeout=10s - read tcp 10.0.14.244:49114->10.0.2.195:6443: read: connection reset by peer
2026-04-20T19:12:06Z node/ip-10-0-90-6.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb331s0t-b8ec4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-6.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-ci-4.18-upgrade-from-stable-4.17-e2e-aws-ovn-uwm (all) - 1 runs, 0% failed, 100% of runs match
#2046275590511661056 junit 36 minutes ago
Apr 20 20:24:30.718 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited: Patch "https://api-int.ci-op-zf7lf2r3-af2ec.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/openshift-ovn-kubernetes-node-limited?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.101.133:43936->10.0.88.237:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 20:24:30.718 - 27s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited: Patch "https://api-int.ci-op-zf7lf2r3-af2ec.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/openshift-ovn-kubernetes-node-limited?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.101.133:43936->10.0.88.237:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 20:24:57.922 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046275590511661056 junit 36 minutes ago
2026-04-20T19:27:17Z node/ip-10-0-65-232.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zf7lf2r3-af2ec.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-232.ec2.internal?timeout=10s - read tcp 10.0.65.232:59148->10.0.88.237:6443: read: connection reset by peer
2026-04-20T20:24:20Z node/ip-10-0-69-101.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zf7lf2r3-af2ec.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-101.ec2.internal?timeout=10s - read tcp 10.0.69.101:52450->10.0.88.237:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.18-e2e-aws-ovn (all) - 1 runs, 0% failed, 100% of runs match
#2046275460056223744 junit 40 minutes ago
2026-04-20T18:39:19Z node/ip-10-0-22-38.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kmrq991y-9d289.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-38.ec2.internal?timeout=10s - read tcp 10.0.22.38:41622->10.0.80.225:6443: read: connection reset by peer
2026-04-20T18:39:20Z node/ip-10-0-43-79.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kmrq991y-9d289.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-79.ec2.internal?timeout=10s - read tcp 10.0.43.79:52516->10.0.39.85:6443: read: connection reset by peer
2026-04-20T18:39:31Z node/ip-10-0-43-79.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kmrq991y-9d289.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-79.ec2.internal?timeout=10s - read tcp 10.0.43.79:52380->10.0.39.85:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.16-e2e-aws-upgrade-ovn-single-node (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275215184367616 junit 40 minutes ago
E0420 18:32:53.939093       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-cr7q5khz-6e4db.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: TLS handshake timeout - error from a previous attempt: unexpected EOF
E0420 18:33:26.278825       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-cr7q5khz-6e4db.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.93.115:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.70.97:35880->10.0.93.115:6443: read: connection reset by peer
I0420 18:34:04.619240       1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
periodic-ci-openshift-release-main-ci-4.16-upgrade-from-stable-4.15-e2e-aws-sdn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275295723393024 junit 38 minutes ago
2026-04-20T18:33:00Z node/ip-10-0-5-126.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-78dfzy92-9caae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-126.us-east-2.compute.internal?timeout=10s - read tcp 10.0.5.126:43794->10.0.86.129:6443: read: connection reset by peer
2026-04-20T18:37:27Z node/ip-10-0-8-64.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-78dfzy92-9caae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-64.us-east-2.compute.internal?timeout=10s - read tcp 10.0.8.64:51852->10.0.86.129:6443: read: connection reset by peer
2026-04-20T19:13:59Z node/ip-10-0-8-64.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-78dfzy92-9caae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-64.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046275295723393024 junit 38 minutes ago
2026-04-20T19:14:37Z node/ip-10-0-8-64.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-78dfzy92-9caae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-64.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T19:14:38Z node/ip-10-0-8-64.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-78dfzy92-9caae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-64.us-east-2.compute.internal?timeout=10s - read tcp 10.0.8.64:43864->10.0.6.218:6443: read: connection reset by peer
2026-04-20T19:14:43Z node/ip-10-0-103-106.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-78dfzy92-9caae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-106.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046275295723393024 junit 38 minutes ago
2026-04-20T19:15:03Z node/ip-10-0-103-106.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-78dfzy92-9caae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-106.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T19:20:53Z node/ip-10-0-23-103.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-78dfzy92-9caae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-103.us-east-2.compute.internal?timeout=10s - read tcp 10.0.23.103:57356->10.0.86.129:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.19-e2e-aws-ovn-etcd-scaling (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275853624545280 junit 46 minutes ago
2026-04-20T19:37:45Z node/ip-10-0-96-186.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6fjdgyi6-c0294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-186.us-east-2.compute.internal?timeout=10s - read tcp 10.0.96.186:46494->10.0.101.197:6443: read: connection reset by peer
2026-04-20T19:43:51Z node/ip-10-0-96-164.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6fjdgyi6-c0294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-164.us-east-2.compute.internal?timeout=10s - read tcp 10.0.96.164:38326->10.0.53.77:6443: read: connection reset by peer
2026-04-20T19:47:45Z node/ip-10-0-105-190.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6fjdgyi6-c0294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-190.us-east-2.compute.internal?timeout=10s - read tcp 10.0.105.190:51466->10.0.101.197:6443: read: connection reset by peer
2026-04-20T19:47:49Z node/ip-10-0-36-160.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6fjdgyi6-c0294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-160.us-east-2.compute.internal?timeout=10s - read tcp 10.0.36.160:33766->10.0.101.197:6443: read: connection reset by peer
2026-04-20T20:07:11Z node/ip-10-0-91-37.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6fjdgyi6-c0294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-91-37.us-east-2.compute.internal?timeout=10s - read tcp 10.0.91.37:50764->10.0.53.77:6443: read: connection reset by peer
2026-04-20T20:07:18Z node/ip-10-0-105-190.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6fjdgyi6-c0294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-190.us-east-2.compute.internal?timeout=10s - read tcp 10.0.105.190:55502->10.0.53.77:6443: read: connection reset by peer
2026-04-20T20:07:39Z node/ip-10-0-96-164.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6fjdgyi6-c0294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-164.us-east-2.compute.internal?timeout=10s - read tcp 10.0.96.164:37292->10.0.53.77:6443: read: connection reset by peer
2026-04-20T20:13:27Z node/ip-10-0-96-164.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6fjdgyi6-c0294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-164.us-east-2.compute.internal?timeout=10s - read tcp 10.0.96.164:34306->10.0.101.197:6443: read: connection reset by peer
2026-04-20T20:17:10Z node/ip-10-0-105-190.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6fjdgyi6-c0294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-190.us-east-2.compute.internal?timeout=10s - read tcp 10.0.105.190:34466->10.0.101.197:6443: read: connection reset by peer
2026-04-20T20:17:13Z node/ip-10-0-96-186.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6fjdgyi6-c0294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-186.us-east-2.compute.internal?timeout=10s - read tcp 10.0.96.186:36346->10.0.101.197:6443: read: connection reset by peer
2026-04-20T20:17:21Z node/ip-10-0-96-164.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6fjdgyi6-c0294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-164.us-east-2.compute.internal?timeout=10s - read tcp 10.0.96.164:44594->10.0.101.197:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.15-e2e-aws-sdn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2046275073089736704 junit 43 minutes ago
2026-04-20T18:38:42Z node/ip-10-0-0-5.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-vg1tb3r5-9215b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-5.us-west-2.compute.internal?timeout=10s - read tcp 10.0.0.5:40842->10.0.82.251:6443: read: connection reset by peer
2026-04-20T18:47:00Z node/ip-10-0-103-220.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-vg1tb3r5-9215b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-220.us-west-2.compute.internal?timeout=10s - read tcp 10.0.103.220:56076->10.0.82.251:6443: read: connection reset by peer
2026-04-20T19:22:09Z node/ip-10-0-26-232.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-vg1tb3r5-9215b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-232.us-west-2.compute.internal?timeout=10s - read tcp 10.0.26.232:60248->10.0.82.251:6443: read: connection reset by peer
2026-04-20T19:27:39Z node/ip-10-0-26-232.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-vg1tb3r5-9215b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-232.us-west-2.compute.internal?timeout=10s - read tcp 10.0.26.232:37048->10.0.9.59:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.16-upgrade-from-stable-4.15-e2e-aws-sdn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2046276308152881152 junit 49 minutes ago
2026-04-20T18:24:10Z node/ip-10-0-73-12.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zx6j6if7-464a1.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-12.ec2.internal?timeout=10s - read tcp 10.0.73.12:43704->10.0.80.129:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.16-upgrade-from-stable-4.15-e2e-aws-ovn-uwm (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275292367949824 junit 50 minutes ago
I0420 19:54:15.212587       1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 3 items received
E0420 19:54:15.212623       1 leaderelection.go:308] Failed to release lock: Put "https://api-int.ci-op-y61twr5x-97559.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": read tcp 10.0.30.246:52546->10.0.29.63:6443: read: connection reset by peer
error running approver: leader election lost
#2046275292367949824 junit 50 minutes ago
2026-04-20T19:54:08Z node/ip-10-0-26-76.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y61twr5x-97559.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-76.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T19:54:11Z node/ip-10-0-30-246.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y61twr5x-97559.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-246.us-west-1.compute.internal?timeout=10s - read tcp 10.0.30.246:43908->10.0.29.63:6443: read: connection reset by peer
2026-04-20T19:54:12Z node/ip-10-0-69-11.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y61twr5x-97559.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-11.us-west-1.compute.internal?timeout=10s - read tcp 10.0.69.11:49934->10.0.29.63:6443: read: connection reset by peer
2026-04-20T20:08:13Z node/ip-10-0-115-54.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y61twr5x-97559.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-115-54.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-ci-4.16-upgrade-from-stable-4.15-e2e-aws-sdn-upgrade-workload (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275300584591360 junit 51 minutes ago
Apr 20 19:11:26.661 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-cloud-network-config-controller/kube-cloud-config: failed to apply / update (/v1, Kind=ConfigMap) openshift-cloud-network-config-controller/kube-cloud-config: Patch "https://api-int.ci-op-i5rbp8nk-3ef32.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-cloud-network-config-controller/configmaps/kube-cloud-config?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.117.219:44800->10.0.72.212:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 19:11:26.661 - 47s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-cloud-network-config-controller/kube-cloud-config: failed to apply / update (/v1, Kind=ConfigMap) openshift-cloud-network-config-controller/kube-cloud-config: Patch "https://api-int.ci-op-i5rbp8nk-3ef32.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-cloud-network-config-controller/configmaps/kube-cloud-config?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.117.219:44800->10.0.72.212:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 19:12:14.616 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046275300584591360 junit 51 minutes ago
2026-04-20T19:14:59Z node/ip-10-0-94-73.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5rbp8nk-3ef32.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-94-73.us-west-2.compute.internal?timeout=10s - read tcp 10.0.94.73:38336->10.0.49.46:6443: read: connection reset by peer
2026-04-20T19:18:55Z node/ip-10-0-117-219.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5rbp8nk-3ef32.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-219.us-west-2.compute.internal?timeout=10s - read tcp 10.0.117.219:47486->10.0.72.212:6443: read: connection reset by peer
2026-04-20T20:04:37Z node/ip-10-0-117-219.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5rbp8nk-3ef32.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-219.us-west-2.compute.internal?timeout=10s - read tcp 10.0.117.219:39998->10.0.49.46:6443: read: connection reset by peer
2026-04-20T20:11:27Z node/ip-10-0-30-219.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5rbp8nk-3ef32.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-219.us-west-2.compute.internal?timeout=10s - read tcp 10.0.30.219:39558->10.0.49.46:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.15-upgrade-from-stable-4.14-e2e-aws-sdn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275144631980032 junit 52 minutes ago
2026-04-20T18:28:06Z node/ip-10-0-13-208.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fw09zv73-51dde.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-208.ec2.internal?timeout=10s - read tcp 10.0.13.208:36584->10.0.101.225:6443: read: connection reset by peer
2026-04-20T18:32:19Z node/ip-10-0-8-43.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fw09zv73-51dde.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-43.ec2.internal?timeout=10s - read tcp 10.0.8.43:55808->10.0.101.225:6443: read: connection reset by peer
2026-04-20T18:32:30Z node/ip-10-0-25-168.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fw09zv73-51dde.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-168.ec2.internal?timeout=10s - read tcp 10.0.25.168:46828->10.0.101.225:6443: read: connection reset by peer
2026-04-20T18:32:30Z node/ip-10-0-13-208.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fw09zv73-51dde.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-208.ec2.internal?timeout=10s - read tcp 10.0.13.208:33066->10.0.44.250:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.20-e2e-gcp-cilium (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275538145775616 junit 54 minutes ago
    STEP: creating a large number of resources @ 04/20/26 18:43:48.521
  I0420 18:43:53.222979 3307 chunking.go:66] Got an error creating template 143: Post "https://api.ci-op-kxsgjpd6-d5439.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/e2e-chunking-7185/podtemplates": read tcp 10.131.16.7:40928->34.49.130.48:6443: read: connection reset by peer
  I0420 18:43:53.223015 3307 chunking.go:66] Got an error creating template 142: Post "https://api.ci-op-kxsgjpd6-d5439.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/e2e-chunking-7185/podtemplates": read tcp 10.131.16.7:40928->34.49.130.48:6443: read: connection reset by peer
  I0420 18:43:54.289388 3307 chunking.go:66] Got an error creating template 142: podtemplates "template-0142" already exists
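The chunking retry above shows a classic non-idempotent-create race: the first POST for template-0142 reached the apiserver, which persisted the object before the connection was reset, so the client saw an error for a create that in fact succeeded, and the retry got "already exists". A hedged sketch of that failure mode and the usual fix (treating AlreadyExists on retry as success) — `store`, `create`, and `createWithRetry` are invented names, not the e2e framework's API:

```go
// Sketch of a create whose reply is lost after the server-side write,
// and a retry helper that reconciles the resulting AlreadyExists.
package main

import (
	"errors"
	"fmt"
)

var errAlreadyExists = errors.New("already exists")

type store map[string]bool

// create persists the object, then (when dropReply is set) reports a
// connection reset to the caller -- after the write already happened.
func (s store) create(name string, dropReply bool) error {
	if s[name] {
		return errAlreadyExists
	}
	s[name] = true
	if dropReply {
		return errors.New("read: connection reset by peer")
	}
	return nil
}

// createWithRetry treats AlreadyExists on a retry as success: the only way
// the object can already exist is that an earlier attempt went through.
func createWithRetry(s store, name string) error {
	err := s.create(name, true) // first attempt: write lands, reply lost
	for i := 0; err != nil && i < 3; i++ {
		err = s.create(name, false)
		if errors.Is(err, errAlreadyExists) {
			return nil
		}
	}
	return err
}

func main() {
	s := store{}
	fmt.Println(createWithRetry(s, "template-0142")) // <nil>: retry reconciled
}
```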
periodic-ci-openshift-release-main-nightly-4.15-upgrade-from-stable-4.14-e2e-aws-sdn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2046276050815553536 junit 55 minutes ago
2026-04-20T18:25:06Z node/ip-10-0-68-17.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yq5fr7r1-9186a.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-68-17.us-east-2.compute.internal?timeout=10s - read tcp 10.0.68.17:46028->10.0.57.248:6443: read: connection reset by peer
2026-04-20T18:29:02Z node/ip-10-0-46-84.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yq5fr7r1-9186a.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-84.us-east-2.compute.internal?timeout=10s - read tcp 10.0.46.84:56562->10.0.114.49:6443: read: connection reset by peer
2026-04-20T18:32:55Z node/ip-10-0-101-137.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yq5fr7r1-9186a.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-137.us-east-2.compute.internal?timeout=10s - read tcp 10.0.101.137:38340->10.0.57.248:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.13-e2e-gcp-sdn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2046274678074380288 junit About an hour ago
Apr 20 18:27:05.752 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ff4gv46d-84a1c-8pl7k-master-2 node/ci-op-ff4gv46d-84a1c-8pl7k-master-2 uid/bffa7cd9-cbfa-49b9-b0b6-767b273da890 container/kube-apiserver-cert-syncer reason/ContainerExit code/2 cause/Error rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 18:21:44.541106       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0420 18:21:44.541522       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 18:21:45.925405       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0420 18:21:45.925860       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} 
{user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 20 18:27:22.088 - 999ms E disruption/openshift-api connection/new reason/DisruptionBegan disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-ff4gv46d-84a1c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": read tcp 10.129.22.52:44210->35.226.172.235:6443: read: connection reset by peer
Apr 20 18:30:00.026 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ff4gv46d-84a1c-8pl7k-master-1 node/ci-op-ff4gv46d-84a1c-8pl7k-master-1 uid/8b5fb9b8-e337-440e-8d2e-cbfba0acdcec container/kube-apiserver-cert-syncer reason/ContainerExit code/2 cause/Error rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 18:21:44.542347       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0420 18:21:44.543275       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 18:21:45.926560       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0420 18:21:45.927057       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} 
{user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 8 runs, 0% failed, 13% of runs match
#2046258964168970240 junit About an hour ago
2026-04-20T18:35:39Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
2026-04-20T18:35:54Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::18]:53508->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-20T18:35:56Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::14]:47228->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-20T18:36:03Z node/master-2.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046258964168970240 junit About an hour ago
2026-04-20T18:36:29Z node/master-2.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2.ostest.test.metalkube.org?timeout=10s - http2: client connection lost
2026-04-20T18:42:11Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::17]:57066->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-20T18:42:11Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::18]:35348->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-20T18:42:22Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
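The metal-ipi run above mixes four distinct transport failures ("connection reset by peer", "connection refused", "Client.Timeout exceeded", "http2: client connection lost"), which point at different things: resets and refusals suggest apiserver restarts or LB churn, while client timeouts suggest plain slowness. A small triage helper — not part of any OpenShift tooling, just a sketch for bucketing these FailedToUpdateLease lines:

```go
// Bucket FailedToUpdateLease messages by their underlying transport error.
// The category names are invented for this sketch.
package main

import (
	"fmt"
	"strings"
)

func classify(msg string) string {
	switch {
	case strings.Contains(msg, "connection reset by peer"):
		return "reset"
	case strings.Contains(msg, "connection refused"):
		return "refused"
	case strings.Contains(msg, "Client.Timeout exceeded"):
		return "client-timeout"
	case strings.Contains(msg, "http2: client connection lost"):
		return "http2-lost"
	default:
		return "other"
	}
}

func main() {
	// Error suffixes taken verbatim from the entries above.
	lines := []string{
		"read tcp [fd2e:6f44:5dd8:c956::18]:53508->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer",
		"dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused",
		"net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
		"http2: client connection lost",
	}
	counts := map[string]int{}
	for _, l := range lines {
		counts[classify(l)]++
	}
	fmt.Println(counts)
}
```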
periodic-ci-openshift-release-main-ci-4.15-e2e-aws-sdn-cgroupsv2 (all) - 1 runs, 0% failed, 100% of runs match
#2046275072578031616 junit About an hour ago
2026-04-20T18:37:36Z node/ip-10-0-77-215.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w07ltbfw-86822.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-77-215.us-east-2.compute.internal?timeout=10s - read tcp 10.0.77.215:38006->10.0.51.218:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.15-upgrade-from-stable-4.14-e2e-aws-sdn-upgrade-workload (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275148922753024 junit About an hour ago
Apr 20 19:02:14.884 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-cloud-network-config-controller/trusted-ca: failed to apply / update (/v1, Kind=ConfigMap) openshift-cloud-network-config-controller/trusted-ca: Patch "https://api-int.ci-op-x31v0xwd-b4bfa.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-cloud-network-config-controller/configmaps/trusted-ca?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.124.190:57388->10.0.26.156:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 19:02:14.884 - 48s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-cloud-network-config-controller/trusted-ca: failed to apply / update (/v1, Kind=ConfigMap) openshift-cloud-network-config-controller/trusted-ca: Patch "https://api-int.ci-op-x31v0xwd-b4bfa.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-cloud-network-config-controller/configmaps/trusted-ca?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.124.190:57388->10.0.26.156:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 19:03:02.907 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
Apr 20 19:05:28.650 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=Namespace) /openshift-ovn-kubernetes: failed to apply / update (/v1, Kind=Namespace) /openshift-ovn-kubernetes: Patch "https://api-int.ci-op-x31v0xwd-b4bfa.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-ovn-kubernetes?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.124.190:60226->10.0.76.17:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 19:05:28.650 - 48s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=Namespace) /openshift-ovn-kubernetes: failed to apply / update (/v1, Kind=Namespace) /openshift-ovn-kubernetes: Patch "https://api-int.ci-op-x31v0xwd-b4bfa.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-ovn-kubernetes?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.124.190:60226->10.0.76.17:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 19:06:16.650 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046275148922753024 junit About an hour ago
2026-04-20T19:04:58Z node/ip-10-0-70-154.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-x31v0xwd-b4bfa.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-154.ec2.internal?timeout=10s - read tcp 10.0.70.154:57358->10.0.76.17:6443: read: connection reset by peer
2026-04-20T19:04:59Z node/ip-10-0-124-190.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-x31v0xwd-b4bfa.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-124-190.ec2.internal?timeout=10s - read tcp 10.0.124.190:38584->10.0.76.17:6443: read: connection reset by peer
2026-04-20T19:05:11Z node/ip-10-0-0-206.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-x31v0xwd-b4bfa.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-206.ec2.internal?timeout=10s - read tcp 10.0.0.206:44978->10.0.26.156:6443: read: connection reset by peer
2026-04-20T19:34:07Z node/ip-10-0-0-206.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-x31v0xwd-b4bfa.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-206.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-ci-4.13-upgrade-from-stable-4.12-e2e-gcp-sdn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046274725885251584 junit About an hour ago
Apr 20 18:52:49.798 E ns/openshift-e2e-loki pod/loki-promtail-sp7vs node/ci-op-xtf97zbi-0bb1d-xg9gl-master-0 uid/d184fb46-2f57-4180-b815-3550d388e50b container/oauth-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Apr 20 18:53:01.983 - 1s    E disruption/cache-openshift-api connection/new reason/DisruptionBegan disruption/cache-openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-xtf97zbi-0bb1d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams?resourceVersion=0": read tcp 10.129.22.32:49132->34.9.237.76:6443: read: connection reset by peer
Apr 20 18:53:06.982 - 1s    E disruption/cache-kube-api connection/new reason/DisruptionBegan disruption/cache-kube-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-xtf97zbi-0bb1d.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default?resourceVersion=0": read tcp 10.129.22.32:49418->34.9.237.76:6443: read: connection reset by peer
Apr 20 18:53:06.982 - 1s    E disruption/cache-openshift-api connection/new reason/DisruptionBegan disruption/cache-openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-xtf97zbi-0bb1d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams?resourceVersion=0": read tcp 10.129.22.32:49414->34.9.237.76:6443: read: connection reset by peer
Apr 20 18:53:12.993 E ns/openshift-cluster-csi-drivers pod/gcp-pd-csi-driver-operator-75b4648557-7cdfm node/ci-op-xtf97zbi-0bb1d-xg9gl-master-1 uid/21cc6c3a-1a97-4e1e-8772-93b871510508 container/gcp-pd-csi-driver-operator reason/ContainerExit code/1 cause/Error
periodic-ci-openshift-release-main-ci-4.19-e2e-aws-ovn-proxy (all) - 1 runs, 0% failed, 100% of runs match
#2046275512149479424 junit About an hour ago
2026-04-20T18:44:19Z node/ip-10-0-52-80.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4htnr2q4-94041.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-80.us-east-2.compute.internal?timeout=10s - read tcp 10.0.52.80:43312->10.0.59.131:6443: read: connection reset by peer
2026-04-20T18:44:19Z node/ip-10-0-68-198.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4htnr2q4-94041.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-68-198.us-east-2.compute.internal?timeout=10s - read tcp 10.0.68.198:53636->10.0.64.46:6443: read: connection reset by peer
periodic-ci-openshift-hypershift-release-5.0-periodics-e2e-aws-ovn-conformance (all) - 75 runs, 65% failed, 20% of failures match = 13% impact
#2046259273154957312 junit About an hour ago
# [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
fail [k8s.io/kubernetes/test/e2e/framework/framework.go:396]: Couldn't delete ns: "e2e-deployment-6053": Delete "https://a100e546a8f6e4a79ad5655f3db2c12e-591aab8fb1d842d5.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-deployment-6053": read tcp 172.24.62.7:34650->54.88.98.105:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://a100e546a8f6e4a79ad5655f3db2c12e-591aab8fb1d842d5.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-deployment-6053", Err:(*net.OpError)(0xc0013f0550)})
#2046259278355894272 junit 3 hours ago
# [sig-storage] In-tree Volumes [Driver: local] [LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
fail [k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:194]: failed to delete pod hostexec-ip-10-0-14-165.ec2.internal-j8qvv in namespace e2e-provisioning-6779: Delete "https://a988ed734769b4de6ac2affcd229c076-c80b977c82a4503e.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-provisioning-6779/pods/hostexec-ip-10-0-14-165.ec2.internal-j8qvv": read tcp 172.24.6.89:35580->34.205.46.236:6443: read: connection reset by peer
#2046041510377426944 junit 17 hours ago
# [sig-arch] [Conformance] sysctl whitelists net.ipv4.ping_group_range [Suite:openshift/conformance/parallel/minimal]
fail [github.com/openshift/origin/test/extended/security/sysctl.go:70]: failed to create new exec pod in namespace: e2e-test-sysctl-fwb9q: Post "https://acf4aeeff67a84b90b7c09ab07006eeb-c28208db8a7c0a3c.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-test-sysctl-fwb9q/pods": read tcp 172.24.34.8:59392->32.193.170.135:6443: read: connection reset by peer
#2046041510377426944 junit 17 hours ago
    <*url.Error | 0xc00a8c3bc0>:
    Post "https://acf4aeeff67a84b90b7c09ab07006eeb-c28208db8a7c0a3c.elb.us-east-1.amazonaws.com:6443/apis/authorization.openshift.io/v1/namespaces/e2e-test-project-api-dgwq7/rolebindings": read tcp 172.24.34.8:59418->32.193.170.135:6443: read: connection reset by peer
    {
#2046041506183122944 junit 17 hours ago
If you don't see a command prompt, try pressing enter.
E0420 02:29:17.822387   20668 v2.go:167] "Unhandled Error" err="next reader: read tcp 172.24.43.185:39594->34.235.219.137:6443: read: connection reset by peer"
E0420 02:29:17.822401   20668 v2.go:129] "Unhandled Error" err="next reader: read tcp 172.24.43.185:39594->34.235.219.137:6443: read: connection reset by peer"
E0420 02:29:17.822419   20668 v2.go:150] "Unhandled Error" err="next reader: read tcp 172.24.43.185:39594->34.235.219.137:6443: read: connection reset by peer"
warning: couldn't attach to pod/run-test, falling back to streaming logs: error reading from error stream: next reader: read tcp 172.24.43.185:39594->34.235.219.137:6443: read: connection reset by peer
E0420 02:29:20.525976   20668 websocket.go:514] Websocket Ping failed: set tcp 172.24.43.185:39594: use of closed network connection
#2046041495407955968 junit 17 hours ago
2026-04-20T03:38:15Z node/ip-10-0-8-15.ec2.internal - reason/FailedToUpdateLease https://aab3a7742bd90485f9c7dc9dfb9c0f32-01215e0b5d321afd.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-15.ec2.internal?timeout=10s - read tcp 10.0.8.15:35396->3.93.225.234:6443: read: connection reset by peer
#2046041513728675840 junit 17 hours ago
# [sig-cli] Kubectl exec should be able to execute 1000 times in a container
fail [k8s.io/kubernetes/test/e2e/kubectl/exec.go:93]: Exec failed 144 times with the following errors (each distinct message shown once; verbatim repeats elided):
context deadline exceeded
dial tcp 54.88.59.162:6443: connect: connection refused
dial tcp 54.211.178.146:6443: connect: connection refused
dial tcp 52.203.36.133:6443: connect: connection refused
dial tcp 54.88.59.162:6443: i/o timeout
dial tcp 52.203.36.133:6443: i/o timeout
Internal error occurred: error sending request: Post "https://10.0.8.37:10250/exec/e2e-exec-4277/test-exec-pod/agnhost-container?command=date&error=1&output=1": dial tcp 172.30.135.85:8090: connect: connection refused
read tcp 172.24.43.201:PORT->54.88.59.162:6443: read: connection reset by peer (source ports 37264, 32926, 33116, 33494, 33480, 33530, 33536)
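An aggregated failure message like the one above interleaves a handful of distinct error strings dozens of times. A minimal sketch (hypothetical helper, not part of the e2e suite) that tallies the distinct causes by matching known substrings against the raw blob:

```python
from collections import Counter

# Error substrings seen in aggregated e2e exec failures (from the log above).
PATTERNS = [
    "context deadline exceeded",
    "connect: connection refused",
    "i/o timeout",
    "connection reset by peer",
]

def classify_errors(blob: str) -> Counter:
    """Count how often each known error pattern appears in the blob."""
    return Counter({p: blob.count(p) for p in PATTERNS})

# Shortened sample in the same shape as the failure message above.
blob = (
    "context deadline exceeded "
    "dial tcp 54.88.59.162:6443: connect: connection refused "
    "dial tcp 54.88.59.162:6443: i/o timeout "
    "context deadline exceeded"
)
print(classify_errors(blob))
```

Substring matching is deliberately coarse: it groups all `dial tcp` refusals together regardless of endpoint, which is usually what you want when deciding whether a run failed on API-server unavailability versus timeouts.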
#2045668333046468608 (junit, 42 hours ago)
# [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
fail [k8s.io/kubernetes/test/e2e/storage/utils/local.go:241]: error reading from error stream: next reader: read tcp 172.24.178.8:42396->44.207.44.1:6443: read: connection reset by peer
#2045668328831193088 (junit, 42 hours ago)
2026-04-19T02:22:27Z node/ip-10-0-3-120.ec2.internal - reason/FailedToUpdateLease https://a452a9fcef3ae4e8ea4fa33b4046d0e5-53198b83b6f1c631.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-3-120.ec2.internal?timeout=10s - context deadline exceeded
2026-04-19T02:56:30Z node/ip-10-0-3-120.ec2.internal - reason/FailedToUpdateLease https://a452a9fcef3ae4e8ea4fa33b4046d0e5-53198b83b6f1c631.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-3-120.ec2.internal?timeout=10s - read tcp 10.0.3.120:44054->18.232.40.1:6443: read: connection reset by peer
#2045668324641083392 (junit, 42 hours ago)
# [sig-storage] CSI Mock volume expansion Expansion with recovery should record target size in allocated resources
fail [k8s.io/kubernetes/test/e2e/storage/csimock/base.go:277]: while cleaning up after test: Delete "https://afa2968f4c89f42328d1bdb224ad618d-b69ea3bf827f2f74.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-csi-mock-volumes-expansion-3429/resourcequotas/e2e-csi-mock-volumes-expansion-3429-quota": read tcp 172.24.180.8:48418->107.22.175.68:6443: read: connection reset by peer
#2045668321310806016 (junit, 42 hours ago)
    <*url.Error | 0xc002f020f0>:
    Post "https://a0fa5ad7378354a9b87cdae7bf256ea2-39cf0f281b9a056e.elb.us-east-1.amazonaws.com:6443/apis/route.openshift.io/v1/namespaces/e2e-test-router-config-manager-4ql87/routes": read tcp 172.24.170.9:38636->54.158.107.228:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.19-e2e-gcp-ovn-etcd-scaling (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046277484382523392 (junit, about an hour ago)
2026-04-20T18:59:20Z node/ci-op-lntsjc5g-b43e4-zw6k2-worker-b-w79dk - reason/FailedToUpdateLease https://api-int.ci-op-lntsjc5g-b43e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-lntsjc5g-b43e4-zw6k2-worker-b-w79dk?timeout=10s - read tcp 10.0.128.4:49728->10.0.0.2:6443: read: connection reset by peer
2026-04-20T18:59:32Z node/ci-op-lntsjc5g-b43e4-zw6k2-master-jdvlw-0 - reason/FailedToUpdateLease https://api-int.ci-op-lntsjc5g-b43e4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-lntsjc5g-b43e4-zw6k2-master-jdvlw-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-ci-4.21-upgrade-from-stable-4.20-e2e-aws-ovn-upgrade (all) - 13 runs, 46% failed, 217% of failures match = 100% impact
#2046252179659952128 (junit, about an hour ago)
2026-04-20T16:50:17Z node/ip-10-0-81-159.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tz76jw6k-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-81-159.us-west-1.compute.internal?timeout=10s - read tcp 10.0.81.159:50682->10.0.63.108:6443: read: connection reset by peer
2026-04-20T16:58:07Z node/ip-10-0-64-11.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tz76jw6k-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-11.us-west-1.compute.internal?timeout=10s - read tcp 10.0.64.11:40636->10.0.63.108:6443: read: connection reset by peer
2026-04-20T16:58:07Z node/ip-10-0-81-159.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tz76jw6k-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-81-159.us-west-1.compute.internal?timeout=10s - read tcp 10.0.81.159:51662->10.0.63.108:6443: read: connection reset by peer
2026-04-20T16:58:10Z node/ip-10-0-58-247.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tz76jw6k-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-247.us-west-1.compute.internal?timeout=10s - read tcp 10.0.58.247:48586->10.0.95.207:6443: read: connection reset by peer
2026-04-20T16:58:16Z node/ip-10-0-33-67.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tz76jw6k-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-33-67.us-west-1.compute.internal?timeout=10s - read tcp 10.0.33.67:48248->10.0.63.108:6443: read: connection reset by peer
2026-04-20T17:37:07Z node/ip-10-0-83-180.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tz76jw6k-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-180.us-west-1.compute.internal?timeout=10s - read tcp 10.0.83.180:48932->10.0.95.207:6443: read: connection reset by peer
#2046220405814857728 (junit, 3 hours ago)
2026-04-20T14:48:43Z node/ip-10-0-24-60.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mim6z3wy-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-60.us-west-1.compute.internal?timeout=10s - read tcp 10.0.24.60:56804->10.0.5.223:6443: read: connection reset by peer
#2046220405814857728 (junit, 3 hours ago)
Apr 20 15:41:48.942 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io: Patch "https://api-int.ci-op-mim6z3wy-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/adminnetworkpolicies.policy.networking.k8s.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.101.114:58306->10.0.5.223:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 15:41:48.942 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io: Patch "https://api-int.ci-op-mim6z3wy-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/adminnetworkpolicies.policy.networking.k8s.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.101.114:58306->10.0.5.223:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 15:42:16.944 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046190522229329920 (junit, 6 hours ago)
2026-04-20T12:42:05Z node/ip-10-0-15-47.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-j9hmgtgd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-47.ec2.internal?timeout=10s - read tcp 10.0.15.47:41812->10.0.36.183:6443: read: connection reset by peer
2026-04-20T12:42:05Z node/ip-10-0-90-83.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-j9hmgtgd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-83.ec2.internal?timeout=10s - read tcp 10.0.90.83:54760->10.0.97.81:6443: read: connection reset by peer
2026-04-20T12:46:20Z node/ip-10-0-15-47.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-j9hmgtgd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-47.ec2.internal?timeout=10s - read tcp 10.0.15.47:38636->10.0.36.183:6443: read: connection reset by peer
2026-04-20T12:46:20Z node/ip-10-0-15-47.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-j9hmgtgd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-47.ec2.internal?timeout=10s - read tcp 10.0.15.47:38786->10.0.36.183:6443: read: connection reset by peer
2026-04-20T12:46:20Z node/ip-10-0-90-83.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-j9hmgtgd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-83.ec2.internal?timeout=10s - read tcp 10.0.90.83:38624->10.0.36.183:6443: read: connection reset by peer
2026-04-20T12:50:45Z node/ip-10-0-15-47.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-j9hmgtgd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-47.ec2.internal?timeout=10s - read tcp 10.0.15.47:43222->10.0.36.183:6443: read: connection reset by peer
2026-04-20T13:35:50Z node/ip-10-0-44-21.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-j9hmgtgd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-21.ec2.internal?timeout=10s - read tcp 10.0.44.21:39972->10.0.36.183:6443: read: connection reset by peer
#2046101008924282880 (junit, 11 hours ago)
2026-04-20T06:27:45Z node/ip-10-0-33-96.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0mditpbd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-33-96.us-west-2.compute.internal?timeout=10s - read tcp 10.0.33.96:42102->10.0.67.87:6443: read: connection reset by peer
2026-04-20T06:27:48Z node/ip-10-0-109-118.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0mditpbd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-118.us-west-2.compute.internal?timeout=10s - read tcp 10.0.109.118:36812->10.0.60.122:6443: read: connection reset by peer
2026-04-20T06:27:49Z node/ip-10-0-127-15.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0mditpbd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-15.us-west-2.compute.internal?timeout=10s - read tcp 10.0.127.15:57118->10.0.60.122:6443: read: connection reset by peer
2026-04-20T06:49:27Z node/ip-10-0-127-15.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0mditpbd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-15.us-west-2.compute.internal?timeout=10s - read tcp 10.0.127.15:49878->10.0.67.87:6443: read: connection reset by peer
2026-04-20T06:53:28Z node/ip-10-0-33-96.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0mditpbd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-33-96.us-west-2.compute.internal?timeout=10s - read tcp 10.0.33.96:48366->10.0.60.122:6443: read: connection reset by peer
2026-04-20T06:53:38Z node/ip-10-0-85-60.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0mditpbd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-85-60.us-west-2.compute.internal?timeout=10s - read tcp 10.0.85.60:39350->10.0.67.87:6443: read: connection reset by peer
2026-04-20T06:53:41Z node/ip-10-0-119-204.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0mditpbd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-204.us-west-2.compute.internal?timeout=10s - read tcp 10.0.119.204:57648->10.0.67.87:6443: read: connection reset by peer
2026-04-20T07:46:29Z node/ip-10-0-127-15.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0mditpbd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-15.us-west-2.compute.internal?timeout=10s - read tcp 10.0.127.15:48544->10.0.60.122:6443: read: connection reset by peer
2026-04-20T07:46:38Z node/ip-10-0-33-96.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0mditpbd-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-33-96.us-west-2.compute.internal?timeout=10s - read tcp 10.0.33.96:47874->10.0.67.87:6443: read: connection reset by peer
#2046097389336399872 (junit, 12 hours ago)
2026-04-20T06:28:15Z node/ip-10-0-57-200.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-psqs478z-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-57-200.ec2.internal?timeout=10s - read tcp 10.0.57.200:39150->10.0.51.110:6443: read: connection reset by peer
2026-04-20T06:31:53Z node/ip-10-0-124-173.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-psqs478z-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-124-173.ec2.internal?timeout=10s - read tcp 10.0.124.173:37580->10.0.51.110:6443: read: connection reset by peer
2026-04-20T06:47:28Z node/ip-10-0-50-133.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-psqs478z-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-133.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T07:08:23Z node/ip-10-0-50-133.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-psqs478z-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-133.ec2.internal?timeout=10s - read tcp 10.0.50.133:54136->10.0.91.186:6443: read: connection reset by peer
2026-04-20T07:23:08Z node/ip-10-0-59-248.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-psqs478z-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-248.ec2.internal?timeout=10s - read tcp 10.0.59.248:40948->10.0.51.110:6443: read: connection reset by peer
#2046036995683127296 (junit, 16 hours ago)
2026-04-20T02:38:58Z node/ip-10-0-88-236.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4b14hxrc-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-88-236.us-east-2.compute.internal?timeout=10s - read tcp 10.0.88.236:40774->10.0.100.183:6443: read: connection reset by peer
2026-04-20T02:39:00Z node/ip-10-0-53-63.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4b14hxrc-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-63.us-east-2.compute.internal?timeout=10s - read tcp 10.0.53.63:51052->10.0.42.3:6443: read: connection reset by peer
2026-04-20T03:24:39Z node/ip-10-0-109-12.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4b14hxrc-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-12.us-east-2.compute.internal?timeout=10s - read tcp 10.0.109.12:37850->10.0.42.3:6443: read: connection reset by peer
2026-04-20T03:31:48Z node/ip-10-0-6-120.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4b14hxrc-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-120.us-east-2.compute.internal?timeout=10s - read tcp 10.0.6.120:51708->10.0.100.183:6443: read: connection reset by peer
#2045980427734224896junit20 hours ago
2026-04-19T22:28:58Z node/ip-10-0-21-137.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ip9zh9kx-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-137.us-west-1.compute.internal?timeout=10s - read tcp 10.0.21.137:60398->10.0.55.233:6443: read: connection reset by peer
2026-04-19T22:29:04Z node/ip-10-0-13-189.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ip9zh9kx-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-189.us-west-1.compute.internal?timeout=10s - read tcp 10.0.13.189:57230->10.0.84.218:6443: read: connection reset by peer
2026-04-19T22:45:59Z node/ip-10-0-40-216.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ip9zh9kx-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-216.us-west-1.compute.internal?timeout=10s - read tcp 10.0.40.216:38034->10.0.55.233:6443: read: connection reset by peer
2026-04-19T22:46:04Z node/ip-10-0-13-189.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ip9zh9kx-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-189.us-west-1.compute.internal?timeout=10s - read tcp 10.0.13.189:35664->10.0.84.218:6443: read: connection reset by peer
2026-04-19T22:46:05Z node/ip-10-0-8-3.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ip9zh9kx-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-3.us-west-1.compute.internal?timeout=10s - read tcp 10.0.8.3:37434->10.0.55.233:6443: read: connection reset by peer
2026-04-19T22:46:20Z node/ip-10-0-40-216.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ip9zh9kx-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-216.us-west-1.compute.internal?timeout=10s - read tcp 10.0.40.216:56922->10.0.84.218:6443: read: connection reset by peer
2026-04-19T22:54:07Z node/ip-10-0-121-118.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ip9zh9kx-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-118.us-west-1.compute.internal?timeout=10s - read tcp 10.0.121.118:38902->10.0.84.218:6443: read: connection reset by peer
2026-04-19T23:31:16Z node/ip-10-0-8-3.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ip9zh9kx-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-3.us-west-1.compute.internal?timeout=10s - read tcp 10.0.8.3:53218->10.0.84.218:6443: read: connection reset by peer
2026-04-19T23:31:22Z node/ip-10-0-74-122.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ip9zh9kx-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-74-122.us-west-1.compute.internal?timeout=10s - read tcp 10.0.74.122:45852->10.0.84.218:6443: read: connection reset by peer
#2045975453260320768junit20 hours ago
2026-04-19T22:22:20Z node/ip-10-0-109-205.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-133vcrpm-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-205.ec2.internal?timeout=10s - read tcp 10.0.109.205:58628->10.0.112.110:6443: read: connection reset by peer
2026-04-19T22:26:26Z node/ip-10-0-99-199.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-133vcrpm-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-99-199.ec2.internal?timeout=10s - read tcp 10.0.99.199:59074->10.0.112.110:6443: read: connection reset by peer
2026-04-19T22:26:35Z node/ip-10-0-109-205.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-133vcrpm-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-205.ec2.internal?timeout=10s - read tcp 10.0.109.205:46818->10.0.57.41:6443: read: connection reset by peer
2026-04-19T22:26:35Z node/ip-10-0-55-238.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-133vcrpm-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-55-238.ec2.internal?timeout=10s - read tcp 10.0.55.238:56592->10.0.112.110:6443: read: connection reset by peer
2026-04-19T23:27:04Z node/ip-10-0-99-199.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-133vcrpm-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-99-199.ec2.internal?timeout=10s - read tcp 10.0.99.199:40540->10.0.112.110:6443: read: connection reset by peer
#2045884156541407232junit26 hours ago
2026-04-19T17:25:57Z node/ip-10-0-111-74.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5hz3pclm-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-74.us-west-1.compute.internal?timeout=10s - read tcp 10.0.111.74:58676->10.0.81.229:6443: read: connection reset by peer
#2045884156541407232junit26 hours ago
2026-04-19T16:05:25Z node/ip-10-0-44-219.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5hz3pclm-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-219.us-west-1.compute.internal?timeout=10s - read tcp 10.0.44.219:57714->10.0.9.82:6443: read: connection reset by peer
2026-04-19T16:05:32Z node/ip-10-0-50-122.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5hz3pclm-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-122.us-west-1.compute.internal?timeout=10s - read tcp 10.0.50.122:42618->10.0.9.82:6443: read: connection reset by peer
2026-04-19T16:05:36Z node/ip-10-0-114-131.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5hz3pclm-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-131.us-west-1.compute.internal?timeout=10s - read tcp 10.0.114.131:57260->10.0.9.82:6443: read: connection reset by peer
2026-04-19T17:25:57Z node/ip-10-0-111-74.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5hz3pclm-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-74.us-west-1.compute.internal?timeout=10s - read tcp 10.0.111.74:58676->10.0.81.229:6443: read: connection reset by peer
#2045873562044076032junit27 hours ago
2026-04-19T15:23:04Z node/ip-10-0-35-180.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37z8g0x0-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-180.us-east-2.compute.internal?timeout=10s - read tcp 10.0.35.180:60316->10.0.63.227:6443: read: connection reset by peer
2026-04-19T15:44:33Z node/ip-10-0-50-181.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37z8g0x0-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-181.us-east-2.compute.internal?timeout=10s - read tcp 10.0.50.181:37086->10.0.127.62:6443: read: connection reset by peer
2026-04-19T16:40:39Z node/ip-10-0-35-180.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37z8g0x0-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-180.us-east-2.compute.internal?timeout=10s - read tcp 10.0.35.180:57482->10.0.63.227:6443: read: connection reset by peer
2026-04-19T16:40:59Z node/ip-10-0-35-180.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37z8g0x0-124c3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-180.us-east-2.compute.internal?timeout=10s - read tcp 10.0.35.180:33308->10.0.127.62:6443: read: connection reset by peer
#2045792351636426752junit32 hours ago
2026-04-19T10:17:33Z node/ip-10-0-52-32.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bjzr1h9f-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-32.ec2.internal?timeout=10s - read tcp 10.0.52.32:58880->10.0.68.87:6443: read: connection reset by peer
2026-04-19T10:21:21Z node/ip-10-0-85-129.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bjzr1h9f-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-85-129.ec2.internal?timeout=10s - read tcp 10.0.85.129:41198->10.0.68.87:6443: read: connection reset by peer
2026-04-19T10:21:29Z node/ip-10-0-52-32.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bjzr1h9f-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-32.ec2.internal?timeout=10s - read tcp 10.0.52.32:51810->10.0.45.246:6443: read: connection reset by peer
2026-04-19T10:21:58Z node/ip-10-0-117-72.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bjzr1h9f-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-72.ec2.internal?timeout=10s - read tcp 10.0.117.72:42424->10.0.68.87:6443: read: connection reset by peer
#2045792351636426752junit32 hours ago
Apr 19 11:06:10.420 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited: Patch "https://api-int.ci-op-bjzr1h9f-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-ovn-kubernetes/roles/openshift-ovn-kubernetes-control-plane-limited?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.51.167:51130->10.0.68.87:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 11:06:10.420 - 27s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited: Patch "https://api-int.ci-op-bjzr1h9f-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-ovn-kubernetes/roles/openshift-ovn-kubernetes-control-plane-limited?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.51.167:51130->10.0.68.87:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 11:06:38.404 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045715900325171200junit37 hours ago
2026-04-19T05:07:54Z node/ip-10-0-13-74.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0t55svgf-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-74.us-west-2.compute.internal?timeout=10s - read tcp 10.0.13.74:56308->10.0.44.80:6443: read: connection reset by peer
2026-04-19T05:08:05Z node/ip-10-0-90-46.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0t55svgf-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-46.us-west-2.compute.internal?timeout=10s - read tcp 10.0.90.46:41920->10.0.105.127:6443: read: connection reset by peer
2026-04-19T05:08:14Z node/ip-10-0-5-221.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0t55svgf-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-221.us-west-2.compute.internal?timeout=10s - read tcp 10.0.5.221:46488->10.0.44.80:6443: read: connection reset by peer
2026-04-19T05:12:12Z node/ip-10-0-48-107.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0t55svgf-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-107.us-west-2.compute.internal?timeout=10s - read tcp 10.0.48.107:39812->10.0.44.80:6443: read: connection reset by peer
2026-04-19T05:34:05Z node/ip-10-0-90-46.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0t55svgf-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-46.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045715900325171200junit37 hours ago
2026-04-19T05:34:07Z node/ip-10-0-5-221.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0t55svgf-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-221.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T06:03:04Z node/ip-10-0-90-46.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0t55svgf-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-46.us-west-2.compute.internal?timeout=10s - read tcp 10.0.90.46:33834->10.0.105.127:6443: read: connection reset by peer
2026-04-19T06:06:41Z node/ip-10-0-53-86.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0t55svgf-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-86.us-west-2.compute.internal?timeout=10s - context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2026-04-19T06:11:18Z node/ip-10-0-117-47.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0t55svgf-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-47.us-west-2.compute.internal?timeout=10s - read tcp 10.0.117.47:51826->10.0.44.80:6443: read: connection reset by peer
#2045667587525709824junit40 hours ago
2026-04-19T02:02:10Z node/ip-10-0-26-206.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xyg6nhj6-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-206.us-west-1.compute.internal?timeout=10s - read tcp 10.0.26.206:53362->10.0.7.84:6443: read: connection reset by peer
2026-04-19T02:10:06Z node/ip-10-0-4-174.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xyg6nhj6-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-174.us-west-1.compute.internal?timeout=10s - read tcp 10.0.4.174:34466->10.0.7.84:6443: read: connection reset by peer
2026-04-19T02:10:08Z node/ip-10-0-85-197.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xyg6nhj6-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-85-197.us-west-1.compute.internal?timeout=10s - read tcp 10.0.85.197:41488->10.0.7.84:6443: read: connection reset by peer
2026-04-19T02:56:54Z node/ip-10-0-66-67.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xyg6nhj6-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-66-67.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T03:00:52Z node/ip-10-0-26-206.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xyg6nhj6-124c3.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-206.us-west-1.compute.internal?timeout=10s - read tcp 10.0.26.206:57096->10.0.109.164:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.14-e2e-aws-sdn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2046274841568350208junitAbout an hour ago
2026-04-20T17:59:39Z node/ip-10-0-91-8.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5f53m32x-47c36.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-91-8.ec2.internal?timeout=10s - read tcp 10.0.91.8:45930->10.0.107.144:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 8 runs, 13% failed, 100% of failures match = 13% impact
#2046237027627700224junitAbout an hour ago
2026-04-20T16:59:01Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T17:38:02Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::18]:46082->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-20T17:38:03Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::17]:54500->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-20T17:38:11Z node/master-2.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-rt-upgrade (all) - 73 runs, 36% failed, 23% of failures match = 8% impact
#2046237098108784640junit2 hours ago
    <*fmt.wrapError | 0xc009e86c80>:
    error reading from error stream: next reader: read tcp 10.131.168.7:49146->34.54.14.149:6443: read: connection reset by peer
    {
        msg: "error reading from error stream: next reader: read tcp 10.131.168.7:49146->34.54.14.149:6443: read: connection reset by peer",
        err: <*fmt.wrapError | 0xc009e86c60>{
            msg: "next reader: read tcp 10.131.168.7:49146->34.54.14.149:6443: read: connection reset by peer",
            err: <*net.OpError | 0xc002934140>{
#2046237109001392128junit2 hours ago
# [sig-node] [DRA] control plane supports sharing a claim concurrently
fail [k8s.io/kubernetes/test/e2e/dra/utils/builder.go:482]: create *v1.DeviceClass: Post "https://api.ci-op-x4xlzpbr-cd7af.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/resource.k8s.io/v1/deviceclasses": read tcp 10.128.170.7:58116->34.144.195.176:6443: read: connection reset by peer
#2046044878428704768junit14 hours ago
2026-04-20T02:44:14Z node/ci-op-f6fj182f-cd7af-pnt2z-worker-f-68mhl - reason/FailedToUpdateLease https://api-int.ci-op-f6fj182f-cd7af.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-f6fj182f-cd7af-pnt2z-worker-f-68mhl?timeout=10s - read tcp 10.0.128.3:39538->10.0.0.2:6443: read: connection reset by peer
#2045948930973241344junit21 hours ago
# [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
fail [k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:664]: Creating mutating webhook configuration: unexpected error when reading response body. Please retry. Original error: read tcp 10.131.222.7:36416->34.160.215.239:6443: read: connection reset by peer
#2045948943807811584junit21 hours ago
# [sig-network-edge][Feature:Idling] Unidling [apigroup:apps.openshift.io][apigroup:route.openshift.io] should work with TCP (when fully idled) [Suite:openshift/conformance/parallel]
fail [github.com/openshift/origin/test/extended/idling/idling.go:260]: failed to create new exec pod in namespace: e2e-test-cli-idling-2fw59: Post "https://api.ci-op-lbf4g571-cd7af.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/e2e-test-cli-idling-2fw59/pods": read tcp 10.131.222.10:36866->34.102.183.13:6443: read: connection reset by peer
#2045833240190652416junit28 hours ago
# [sig-operator] OLM should have imagePullPolicy:IfNotPresent on thier deployments [Suite:openshift/conformance/parallel]
fail [github.com/openshift/origin/test/extended/olm/olm.go:107]: Unable to get E0419 15:57:32.162382  109077 request.go:1196] "Unexpected error when reading response body" err="read tcp 10.128.182.8:47068->34.8.228.181:6443: read: connection reset by peer"
error: unexpected error when reading response body. Please retry. Original error: read tcp 10.128.182.8:47068->34.8.228.181:6443: read: connection reset by peer, error:Error running oc --kubeconfig=/tmp/kubeconfig-4220350202 get -n openshift-operator-lifecycle-manager deployment packageserver -o=jsonpath={.spec.template.spec.containers[?(@.name=="packageserver")].imagePullPolicy}:
StdOut>
E0419 15:57:32.162382  109077 request.go:1196] "Unexpected error when reading response body" err="read tcp 10.128.182.8:47068->34.8.228.181:6443: read: connection reset by peer"
error: unexpected error when reading response body. Please retry. Original error: read tcp 10.128.182.8:47068->34.8.228.181:6443: read: connection reset by peer
StdErr>
E0419 15:57:32.162382  109077 request.go:1196] "Unexpected error when reading response body" err="read tcp 10.128.182.8:47068->34.8.228.181:6443: read: connection reset by peer"
error: unexpected error when reading response body. Please retry. Original error: read tcp 10.128.182.8:47068->34.8.228.181:6443: read: connection reset by peer
exit status 1
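The FailedToUpdateLease events above all share one line shape (timestamp, `node/<name>`, reason, lease URL, underlying error). A quick way to see which nodes and error kinds dominate is to tally them — a minimal sketch, assuming the raw event lines have been saved to a file; the regex and error buckets below are illustrative, not part of any CI tooling:

```python
import re
from collections import Counter

# Matches lines like:
#   2026-04-20T16:12:00Z node/ip-10-0-11-249.ec2.internal - reason/FailedToUpdateLease
#   https://...:6443/.../leases/<node>?timeout=10s - read tcp ...: connection reset by peer
EVENT_RE = re.compile(
    r"^(?P<ts>\S+) node/(?P<node>\S+) - reason/FailedToUpdateLease "
    r".*?\?timeout=10s - (?P<err>.+)$"
)

def classify(err: str) -> str:
    """Collapse raw error strings into coarse buckets (assumed taxonomy)."""
    if "connection reset by peer" in err:
        return "connection-reset"
    if "Client.Timeout exceeded" in err:
        return "client-timeout"
    if "context deadline exceeded" in err:
        return "deadline-exceeded"
    return "other"

def tally(lines):
    """Count (node, error-bucket) pairs across all matching event lines."""
    counts = Counter()
    for line in lines:
        m = EVENT_RE.match(line.strip())
        if m:
            counts[(m.group("node"), classify(m.group("err")))] += 1
    return counts
```

Sorting the resulting counter by count makes it easy to spot whether the resets concentrate on one node (a local kubelet/network problem) or spread across all nodes (an apiserver/LB-side disruption, as in the excerpts above).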
periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-aws-ovn-conformance (all) - 72 runs, 51% failed, 14% of failures match = 7% impact
#2046237065384824832junit2 hours ago
# [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
fail [:0]: DeferCleanup callback returned error: Delete "https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/psp-csi-hostpath-role-e2e-ephemeral-6428": read tcp 10.129.15.216:45734->3.217.147.82:6443: read: connection reset by peer
#2046237065384824832junit2 hours ago
2026-04-20T16:11:56Z node/ip-10-0-11-249.ec2.internal - reason/FailedToUpdateLease https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-249.ec2.internal?timeout=10s - context deadline exceeded
2026-04-20T16:12:00Z node/ip-10-0-11-249.ec2.internal - reason/FailedToUpdateLease https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-249.ec2.internal?timeout=10s - read tcp 10.0.11.249:37210->3.217.147.82:6443: read: connection reset by peer
2026-04-20T16:12:00Z node/ip-10-0-11-249.ec2.internal - reason/FailedToUpdateLease https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-249.ec2.internal?timeout=10s - read tcp 10.0.11.249:37284->3.217.147.82:6443: read: connection reset by peer
2026-04-20T16:12:05Z node/ip-10-0-7-56.ec2.internal - reason/FailedToUpdateLease https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-56.ec2.internal?timeout=10s - context deadline exceeded (Client.Timeout exceeded while awaiting headers)
#2046237065384824832 junit 2 hours ago
2026-04-20T16:28:25Z node/ip-10-0-11-249.ec2.internal - reason/FailedToUpdateLease https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-249.ec2.internal?timeout=10s - context deadline exceeded
2026-04-20T16:30:05Z node/ip-10-0-11-249.ec2.internal - reason/FailedToUpdateLease https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-249.ec2.internal?timeout=10s - read tcp 10.0.11.249:35380->3.217.147.82:6443: read: connection reset by peer
2026-04-20T16:30:28Z node/ip-10-0-11-249.ec2.internal - reason/FailedToUpdateLease https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-249.ec2.internal?timeout=10s - read tcp 10.0.11.249:35388->3.217.147.82:6443: read: connection reset by peer
2026-04-20T16:30:30Z node/ip-10-0-7-30.ec2.internal - reason/FailedToUpdateLease https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-30.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046237065384824832 junit 2 hours ago
2026-04-20T17:13:35Z node/ip-10-0-11-249.ec2.internal - reason/FailedToUpdateLease https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-249.ec2.internal?timeout=10s - context deadline exceeded
2026-04-20T17:13:58Z node/ip-10-0-7-56.ec2.internal - reason/FailedToUpdateLease https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-56.ec2.internal?timeout=10s - read tcp 10.0.7.56:40190->3.217.147.82:6443: read: connection reset by peer
2026-04-20T17:13:59Z node/ip-10-0-7-56.ec2.internal - reason/FailedToUpdateLease https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-56.ec2.internal?timeout=10s - read tcp 10.0.7.56:39536->3.217.147.82:6443: read: connection reset by peer
2026-04-20T17:14:08Z node/ip-10-0-7-30.ec2.internal - reason/FailedToUpdateLease https://ac50522e5b49d49538c9987c002b319c-54ed41f41180339f.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-30.ec2.internal?timeout=10s - net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
#2046237046963441664 junit 2 hours ago
2026-04-20T17:21:27Z node/ip-10-0-11-1.ec2.internal - reason/FailedToUpdateLease https://a3c6ff3fdb78f4928b419116d0110f73-90e235fd09beb5d3.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-1.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T17:22:29Z node/ip-10-0-11-1.ec2.internal - reason/FailedToUpdateLease https://a3c6ff3fdb78f4928b419116d0110f73-90e235fd09beb5d3.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-1.ec2.internal?timeout=10s - read tcp 10.0.11.1:44730->3.223.175.179:6443: read: connection reset by peer
2026-04-20T17:22:39Z node/ip-10-0-11-1.ec2.internal - reason/FailedToUpdateLease https://a3c6ff3fdb78f4928b419116d0110f73-90e235fd09beb5d3.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-1.ec2.internal?timeout=10s - read tcp 10.0.11.1:44698->3.223.175.179:6443: read: connection reset by peer
2026-04-20T17:22:49Z node/ip-10-0-11-1.ec2.internal - reason/FailedToUpdateLease https://a3c6ff3fdb78f4928b419116d0110f73-90e235fd09beb5d3.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-1.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046237046963441664 junit 2 hours ago
2026-04-20T17:43:43Z node/ip-10-0-6-112.ec2.internal - reason/FailedToUpdateLease https://a3c6ff3fdb78f4928b419116d0110f73-90e235fd09beb5d3.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-112.ec2.internal?timeout=10s - net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
2026-04-20T18:08:20Z node/ip-10-0-6-112.ec2.internal - reason/FailedToUpdateLease https://a3c6ff3fdb78f4928b419116d0110f73-90e235fd09beb5d3.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-112.ec2.internal?timeout=10s - read tcp 10.0.6.112:50614->52.4.202.201:6443: read: connection reset by peer
#2046237063702908928 junit 5 hours ago
# [sig-storage] In-tree Volumes [Driver: local] [LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
fail [k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:383]: pod Delete API error: Delete "https://a3074922348e340c1acc819f4c70c619-3791521c47c198e6.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-volumemode-6863/pods/pod-e9fc3a11-42d2-464a-8299-00f55bdb8df9": read tcp 10.131.29.134:47426->34.192.242.233:6443: read: connection reset by peer
#2046237058665549824 junit 5 hours ago
    <*url.Error | 0xc000afa420>:
    Post "https://afce69a6aa4a04c66b8735e39e4ee09a-a05ec32bb918af3e.elb.us-east-1.amazonaws.com:6443/apis/route.openshift.io/v1/namespaces/e2e-test-router-config-manager-mhh49/routes": read tcp 10.131.29.133:45248->98.83.251.37:6443: read: connection reset by peer
    {
#2046044784476295168 junit 17 hours ago
# [sig-node] [DRA] control plane must deallocate after use
fail [k8s.io/kubernetes/test/e2e/dra/utils/builder.go:482]: create *v1.DeviceClass: Post "https://a82f9da1b3a7d4bbfa5d054fefde3f4b-29f6894d4b504444.elb.us-east-1.amazonaws.com:6443/apis/resource.k8s.io/v1/deviceclasses": read tcp 10.129.244.186:39004->34.194.144.99:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.17-e2e-gcp-cilium (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046275385477304320 junit 2 hours ago
# [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
fail [k8s.io/kubernetes/test/e2e/apimachinery/crd_publish_openapi.go:225]: failed to wait for definition "com.example.crd-publish-openapi-test-unknown-at-root.v1.e2e-test-crd-publish-openapi-5228-crd" not to be served anymore: failed to wait for OpenAPI spec validating condition: read tcp 10.129.14.11:57522->35.190.8.56:6443: read: connection reset by peer; lastMsg:
Error: exit with code 1
periodic-ci-openshift-release-main-stable-4.y-e2e-aws-ovn-upgrade (all) - 8 runs, 50% failed, 25% of failures match = 13% impact
#2046255023301595136 junit 2 hours ago
2026-04-20T16:40:30Z node/ip-10-0-86-54.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kldjh8wd-dcf9a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-86-54.ec2.internal?timeout=10s - read tcp 10.0.86.54:60862->10.0.57.123:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.15-e2e-gcp-ovn-mount-ns-hiding (all) - 1 runs, 0% failed, 100% of runs match
#2046275115359932416 junit 2 hours ago
# [sig-storage] CSI Mock volume fsgroup policies CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None [Suite:openshift/conformance/parallel] [Suite:k8s]
fail [k8s.io/kubernetes@v1.28.2/test/e2e/storage/csi_mock/csi_fsgroup_policy.go:109]: failed: deleting the directory: error sending request: Post "https://api.ci-op-q7mllyfk-6a275.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/e2e-csi-mock-volumes-fsgroup-policy-9784/pods/pvc-volume-tester-782ts/exec?command=%2Fbin%2Fsh&command=-c&command=rm+-fr+%2Fmnt%2Ftest%2Fe2e-csi-mock-volumes-fsgroup-policy-9784&container=volume-tester&container=volume-tester&stderr=true&stdout=true": read tcp 10.128.114.26:53276->35.224.129.31:6443: read: connection reset by peer: error sending request: Post "https://api.ci-op-q7mllyfk-6a275.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/e2e-csi-mock-volumes-fsgroup-policy-9784/pods/pvc-volume-tester-782ts/exec?command=%2Fbin%2Fsh&command=-c&command=rm+-fr+%2Fmnt%2Ftest%2Fe2e-csi-mock-volumes-fsgroup-policy-9784&container=volume-tester&container=volume-tester&stderr=true&stdout=true": read tcp 10.128.114.26:53276->35.224.129.31:6443: read: connection reset by peer
Ginkgo exit error 1: exit with code 1
periodic-ci-openshift-release-main-ci-4.15-e2e-gcp-sdn (all) - 1 runs, 0% failed, 100% of runs match
#2046275127938650112 junit 2 hours ago
stderr:
error: error sending request: Post "https://api.ci-op-zcglpbr7-20252.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/e2e-volume-3122/pods/local-client/exec?command=cat&command=%2Fopt%2F0%2Findex.html&container=local-client&stderr=true&stdout=true": read tcp 10.128.12.8:38460->34.122.151.124:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.19-ocp-e2e-aws-ovn-multi-x-ax (all) - 10 runs, 20% failed, 150% of failures match = 30% impact
#2046241936439775232 junit 2 hours ago
2026-04-20T15:52:23Z node/ip-10-0-106-107.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-m16lmdlp-93e2c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-107.us-west-2.compute.internal?timeout=10s - read tcp 10.0.106.107:55074->10.0.77.86:6443: read: connection reset by peer
2026-04-20T15:52:27Z node/ip-10-0-93-55.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-m16lmdlp-93e2c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-93-55.us-west-2.compute.internal?timeout=10s - read tcp 10.0.93.55:46992->10.0.21.196:6443: read: connection reset by peer
#2045964043285434368 junit 22 hours ago
2026-04-19T21:35:12Z node/ip-10-0-41-64.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-pykwfkqw-93e2c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-41-64.ec2.internal?timeout=10s - read tcp 10.0.41.64:35352->10.0.92.106:6443: read: connection reset by peer
2026-04-19T21:39:20Z node/ip-10-0-15-170.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-pykwfkqw-93e2c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-170.ec2.internal?timeout=10s - read tcp 10.0.15.170:59238->10.0.61.242:6443: read: connection reset by peer
#2045848000919506944 junit 30 hours ago
2026-04-19T13:54:23Z node/ip-10-0-61-134.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-n5p1crcv-93e2c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-134.us-east-2.compute.internal?timeout=10s - read tcp 10.0.61.134:57920->10.0.78.93:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.19-e2e-aws-ovn-serial (all) - 11 runs, 18% failed, 450% of failures match = 82% impact
#2046254377252950016 junit 2 hours ago
2026-04-20T17:59:54Z node/ip-10-0-1-12.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nf6qk4bv-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-1-12.ec2.internal?timeout=10s - context deadline exceeded
2026-04-20T18:34:30Z node/ip-10-0-46-228.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nf6qk4bv-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-228.ec2.internal?timeout=10s - read tcp 10.0.46.228:50606->10.0.93.110:6443: read: connection reset by peer
2026-04-20T18:34:30Z node/ip-10-0-116-198.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nf6qk4bv-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-198.ec2.internal?timeout=10s - read tcp 10.0.116.198:42648->10.0.93.110:6443: read: connection reset by peer
2026-04-20T18:38:29Z node/ip-10-0-107-58.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nf6qk4bv-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-58.ec2.internal?timeout=10s - read tcp 10.0.107.58:35698->10.0.93.110:6443: read: connection reset by peer
2026-04-20T18:43:21Z node/ip-10-0-122-109.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nf6qk4bv-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-122-109.ec2.internal?timeout=10s - read tcp 10.0.122.109:40076->10.0.14.97:6443: read: connection reset by peer
2026-04-20T18:50:57Z node/ip-10-0-1-12.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nf6qk4bv-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-1-12.ec2.internal?timeout=10s - read tcp 10.0.1.12:56334->10.0.14.97:6443: read: connection reset by peer
2026-04-20T18:51:11Z node/ip-10-0-89-240.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nf6qk4bv-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-240.ec2.internal?timeout=10s - read tcp 10.0.89.240:37432->10.0.93.110:6443: read: connection reset by peer
#2046155247289634816 junit 8 hours ago
2026-04-20T10:09:38Z node/ip-10-0-21-231.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-cf91t8mf-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-231.us-east-2.compute.internal?timeout=10s - read tcp 10.0.21.231:54592->10.0.123.182:6443: read: connection reset by peer
2026-04-20T10:09:43Z node/ip-10-0-98-58.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-cf91t8mf-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-98-58.us-east-2.compute.internal?timeout=10s - read tcp 10.0.98.58:45984->10.0.14.161:6443: read: connection reset by peer
2026-04-20T11:46:34Z node/ip-10-0-47-80.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-cf91t8mf-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-80.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T11:59:41Z node/ip-10-0-21-231.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-cf91t8mf-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-231.us-east-2.compute.internal?timeout=10s - read tcp 10.0.21.231:36836->10.0.14.161:6443: read: connection reset by peer
2026-04-20T11:59:52Z node/ip-10-0-26-192.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-cf91t8mf-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-192.us-east-2.compute.internal?timeout=10s - read tcp 10.0.26.192:32952->10.0.14.161:6443: read: connection reset by peer
2026-04-20T12:08:29Z node/ip-10-0-79-33.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-cf91t8mf-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-33.us-east-2.compute.internal?timeout=10s - read tcp 10.0.79.33:54970->10.0.123.182:6443: read: connection reset by peer
2026-04-20T12:12:08Z node/ip-10-0-26-192.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-cf91t8mf-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-192.us-east-2.compute.internal?timeout=10s - read tcp 10.0.26.192:33056->10.0.123.182:6443: read: connection reset by peer
2026-04-20T12:12:15Z node/ip-10-0-79-33.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-cf91t8mf-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-33.us-east-2.compute.internal?timeout=10s - read tcp 10.0.79.33:54134->10.0.123.182:6443: read: connection reset by peer
2026-04-20T12:12:16Z node/ip-10-0-98-58.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-cf91t8mf-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-98-58.us-east-2.compute.internal?timeout=10s - read tcp 10.0.98.58:57138->10.0.123.182:6443: read: connection reset by peer
2026-04-20T12:12:19Z node/ip-10-0-2-12.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-cf91t8mf-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-2-12.us-east-2.compute.internal?timeout=10s - read tcp 10.0.2.12:60876->10.0.14.161:6443: read: connection reset by peer
#2046045934894190592 junit 16 hours ago
2026-04-20T04:17:31Z node/ip-10-0-76-230.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9wct2l15-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-230.us-east-2.compute.internal?timeout=10s - context deadline exceeded
2026-04-20T04:41:26Z node/ip-10-0-26-206.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9wct2l15-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-206.us-east-2.compute.internal?timeout=10s - read tcp 10.0.26.206:44230->10.0.17.58:6443: read: connection reset by peer
2026-04-20T04:41:26Z node/ip-10-0-110-98.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9wct2l15-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-110-98.us-east-2.compute.internal?timeout=10s - read tcp 10.0.110.98:43922->10.0.17.58:6443: read: connection reset by peer
2026-04-20T04:41:26Z node/ip-10-0-33-194.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9wct2l15-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-33-194.us-east-2.compute.internal?timeout=10s - read tcp 10.0.33.194:34974->10.0.17.58:6443: read: connection reset by peer
2026-04-20T04:46:13Z node/ip-10-0-26-206.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9wct2l15-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-206.us-east-2.compute.internal?timeout=10s - read tcp 10.0.26.206:45694->10.0.17.58:6443: read: connection reset by peer
2026-04-20T04:46:19Z node/ip-10-0-76-230.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9wct2l15-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-230.us-east-2.compute.internal?timeout=10s - read tcp 10.0.76.230:47962->10.0.17.58:6443: read: connection reset by peer
2026-04-20T04:50:07Z node/ip-10-0-33-194.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9wct2l15-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-33-194.us-east-2.compute.internal?timeout=10s - read tcp 10.0.33.194:46554->10.0.102.217:6443: read: connection reset by peer
2026-04-20T04:50:14Z node/ip-10-0-36-148.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9wct2l15-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-148.us-east-2.compute.internal?timeout=10s - read tcp 10.0.36.148:45194->10.0.17.58:6443: read: connection reset by peer
2026-04-20T04:50:27Z node/ip-10-0-26-206.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9wct2l15-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-206.us-east-2.compute.internal?timeout=10s - read tcp 10.0.26.206:60836->10.0.102.217:6443: read: connection reset by peer
#2045982156982849536 junit 20 hours ago
2026-04-20T00:10:06Z node/ip-10-0-105-34.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x9csxc95-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-34.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T00:47:28Z node/ip-10-0-46-87.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x9csxc95-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-87.us-west-1.compute.internal?timeout=10s - read tcp 10.0.46.87:36286->10.0.82.108:6443: read: connection reset by peer
2026-04-20T00:50:50Z node/ip-10-0-121-72.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x9csxc95-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-72.us-west-1.compute.internal?timeout=10s - read tcp 10.0.121.72:51846->10.0.31.80:6443: read: connection reset by peer
2026-04-20T00:51:01Z node/ip-10-0-46-87.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x9csxc95-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-87.us-west-1.compute.internal?timeout=10s - read tcp 10.0.46.87:56496->10.0.31.80:6443: read: connection reset by peer
2026-04-20T00:54:55Z node/ip-10-0-117-35.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x9csxc95-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-35.us-west-1.compute.internal?timeout=10s - read tcp 10.0.117.35:35286->10.0.82.108:6443: read: connection reset by peer
2026-04-20T00:54:56Z node/ip-10-0-121-72.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x9csxc95-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-72.us-west-1.compute.internal?timeout=10s - read tcp 10.0.121.72:35516->10.0.82.108:6443: read: connection reset by peer
2026-04-20T00:58:50Z node/ip-10-0-117-35.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x9csxc95-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-35.us-west-1.compute.internal?timeout=10s - read tcp 10.0.117.35:60874->10.0.31.80:6443: read: connection reset by peer
2026-04-20T00:58:51Z node/ip-10-0-46-87.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x9csxc95-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-87.us-west-1.compute.internal?timeout=10s - read tcp 10.0.46.87:46844->10.0.82.108:6443: read: connection reset by peer
2026-04-20T01:02:25Z node/ip-10-0-121-72.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x9csxc95-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-72.us-west-1.compute.internal?timeout=10s - read tcp 10.0.121.72:47540->10.0.31.80:6443: read: connection reset by peer
2026-04-20T01:02:30Z node/ip-10-0-104-244.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x9csxc95-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-244.us-west-1.compute.internal?timeout=10s - read tcp 10.0.104.244:51296->10.0.82.108:6443: read: connection reset by peer
#2045916656810594304 junit 25 hours ago
Apr 19 19:35:44.806 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib: failed to apply / update (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib: Patch "https://api-int.ci-op-b1spwkbs-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps/ovnkube-script-lib?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.96.20:34542->10.0.74.228:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 19:35:44.806 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib: failed to apply / update (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib: Patch "https://api-int.ci-op-b1spwkbs-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps/ovnkube-script-lib?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.96.20:34542->10.0.74.228:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 19:36:13.018 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
Apr 19 19:39:40.545 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics: Patch "https://api-int.ci-op-b1spwkbs-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/network-diagnostics?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.96.20:42756->10.0.10.8:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 19:39:40.545 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics: Patch "https://api-int.ci-op-b1spwkbs-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/network-diagnostics?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.96.20:42756->10.0.10.8:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 19:40:08.750 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
Apr 19 19:43:36.232 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity: Patch "https://api-int.ci-op-b1spwkbs-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.96.20:40540->10.0.74.228:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 19:43:36.232 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity: Patch "https://api-int.ci-op-b1spwkbs-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.96.20:40540->10.0.74.228:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 19:44:04.467 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045834615179972608 junit 30 hours ago
2026-04-19T12:49:46Z node/ip-10-0-56-138.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-c3h73g9i-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-56-138.us-west-1.compute.internal?timeout=10s - read tcp 10.0.56.138:45194->10.0.124.183:6443: read: connection reset by peer
2026-04-19T14:18:58Z node/ip-10-0-3-46.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-c3h73g9i-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-3-46.us-west-1.compute.internal?timeout=10s - context deadline exceeded
2026-04-19T14:18:58Z node/ip-10-0-58-85.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-c3h73g9i-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-85.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T14:48:15Z node/ip-10-0-3-46.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-c3h73g9i-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-3-46.us-west-1.compute.internal?timeout=10s - read tcp 10.0.3.46:35476->10.0.124.183:6443: read: connection reset by peer
2026-04-19T14:52:37Z node/ip-10-0-56-138.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-c3h73g9i-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-56-138.us-west-1.compute.internal?timeout=10s - read tcp 10.0.56.138:50566->10.0.41.109:6443: read: connection reset by peer
2026-04-19T14:56:20Z node/ip-10-0-113-228.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-c3h73g9i-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-113-228.us-west-1.compute.internal?timeout=10s - read tcp 10.0.113.228:33060->10.0.124.183:6443: read: connection reset by peer
#2045741308491862016 junit 36 hours ago
Apr 19 08:04:00.849 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-multus/multus-ac: failed to apply / update (/v1, Kind=ServiceAccount) openshift-multus/multus-ac: Patch "https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.126.57:37218->10.0.57.125:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 08:04:00.849 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-multus/multus-ac: failed to apply / update (/v1, Kind=ServiceAccount) openshift-multus/multus-ac: Patch "https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.126.57:37218->10.0.57.125:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 08:04:29.048 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
Apr 19 08:11:23.995 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org: Patch "https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/egressservices.k8s.ovn.org?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.126.57:51548->10.0.73.252:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 08:11:23.995 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org: Patch "https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/egressservices.k8s.ovn.org?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.126.57:51548->10.0.73.252:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 08:11:52.204 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045741308491862016junit36 hours ago
2026-04-19T08:07:06Z node/ip-10-0-75-243.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-75-243.ec2.internal?timeout=10s - read tcp 10.0.75.243:34880->10.0.57.125:6443: read: connection reset by peer
2026-04-19T08:11:08Z node/ip-10-0-126-57.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-57.ec2.internal?timeout=10s - read tcp 10.0.126.57:56646->10.0.73.252:6443: read: connection reset by peer
2026-04-19T08:11:13Z node/ip-10-0-5-84.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-84.ec2.internal?timeout=10s - read tcp 10.0.5.84:39202->10.0.57.125:6443: read: connection reset by peer
2026-04-19T08:11:17Z node/ip-10-0-123-198.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-198.ec2.internal?timeout=10s - read tcp 10.0.123.198:50268->10.0.57.125:6443: read: connection reset by peer
2026-04-19T08:15:45Z node/ip-10-0-75-243.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-75-243.ec2.internal?timeout=10s - read tcp 10.0.75.243:42740->10.0.73.252:6443: read: connection reset by peer
2026-04-19T08:15:49Z node/ip-10-0-5-84.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-84.ec2.internal?timeout=10s - read tcp 10.0.5.84:54632->10.0.73.252:6443: read: connection reset by peer
2026-04-19T08:19:47Z node/ip-10-0-123-198.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-198.ec2.internal?timeout=10s - read tcp 10.0.123.198:54404->10.0.57.125:6443: read: connection reset by peer
2026-04-19T08:23:39Z node/ip-10-0-32-236.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-236.ec2.internal?timeout=10s - read tcp 10.0.32.236:49900->10.0.57.125:6443: read: connection reset by peer
2026-04-19T08:44:41Z node/ip-10-0-75-243.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-95c0h9vp-f90cf.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-75-243.ec2.internal?timeout=10s - context deadline exceeded
#2045646784889360384junit42 hours ago
2026-04-19T00:20:50Z node/ip-10-0-65-81.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdzcsj1n-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-81.us-west-1.compute.internal?timeout=10s - read tcp 10.0.65.81:34632->10.0.97.27:6443: read: connection reset by peer
2026-04-19T00:20:51Z node/ip-10-0-106-164.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdzcsj1n-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-164.us-west-1.compute.internal?timeout=10s - read tcp 10.0.106.164:49706->10.0.97.27:6443: read: connection reset by peer
2026-04-19T01:53:44Z node/ip-10-0-65-81.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdzcsj1n-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-81.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045646784889360384junit42 hours ago
2026-04-19T01:53:45Z node/ip-10-0-51-224.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdzcsj1n-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-224.us-west-1.compute.internal?timeout=10s - context deadline exceeded
2026-04-19T02:32:17Z node/ip-10-0-126-232.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdzcsj1n-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-232.us-west-1.compute.internal?timeout=10s - read tcp 10.0.126.232:33768->10.0.97.27:6443: read: connection reset by peer
2026-04-19T02:36:36Z node/ip-10-0-65-81.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdzcsj1n-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-81.us-west-1.compute.internal?timeout=10s - read tcp 10.0.65.81:55908->10.0.46.181:6443: read: connection reset by peer
2026-04-19T02:39:57Z node/ip-10-0-15-40.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdzcsj1n-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-40.us-west-1.compute.internal?timeout=10s - read tcp 10.0.15.40:52462->10.0.46.181:6443: read: connection reset by peer
2026-04-19T02:44:54Z node/ip-10-0-126-232.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdzcsj1n-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-232.us-west-1.compute.internal?timeout=10s - read tcp 10.0.126.232:59428->10.0.97.27:6443: read: connection reset by peer
2026-04-19T02:52:41Z node/ip-10-0-106-164.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdzcsj1n-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-164.us-west-1.compute.internal?timeout=10s - read tcp 10.0.106.164:50152->10.0.46.181:6443: read: connection reset by peer
2026-04-19T02:52:58Z node/ip-10-0-65-81.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdzcsj1n-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-81.us-west-1.compute.internal?timeout=10s - read tcp 10.0.65.81:43254->10.0.97.27:6443: read: connection reset by peer
2026-04-19T02:53:13Z node/ip-10-0-15-40.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdzcsj1n-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-40.us-west-1.compute.internal?timeout=10s - read tcp 10.0.15.40:50424->10.0.97.27:6443: read: connection reset by peer
#2045580523434151936junit46 hours ago
2026-04-18T21:46:14Z node/ip-10-0-121-14.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3jlv5h2b-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-14.us-west-1.compute.internal?timeout=10s - read tcp 10.0.121.14:34024->10.0.31.225:6443: read: connection reset by peer
2026-04-18T21:50:50Z node/ip-10-0-34-54.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3jlv5h2b-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-34-54.us-west-1.compute.internal?timeout=10s - read tcp 10.0.34.54:34598->10.0.82.220:6443: read: connection reset by peer
2026-04-18T21:50:51Z node/ip-10-0-62-51.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3jlv5h2b-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-51.us-west-1.compute.internal?timeout=10s - read tcp 10.0.62.51:39494->10.0.82.220:6443: read: connection reset by peer
2026-04-18T21:50:57Z node/ip-10-0-7-114.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3jlv5h2b-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-114.us-west-1.compute.internal?timeout=10s - read tcp 10.0.7.114:54034->10.0.31.225:6443: read: connection reset by peer
2026-04-18T21:51:02Z node/ip-10-0-42-153.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3jlv5h2b-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-42-153.us-west-1.compute.internal?timeout=10s - read tcp 10.0.42.153:56572->10.0.82.220:6443: read: connection reset by peer
2026-04-18T21:54:45Z node/ip-10-0-121-14.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3jlv5h2b-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-14.us-west-1.compute.internal?timeout=10s - read tcp 10.0.121.14:57870->10.0.31.225:6443: read: connection reset by peer
2026-04-18T21:54:57Z node/ip-10-0-42-153.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3jlv5h2b-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-42-153.us-west-1.compute.internal?timeout=10s - read tcp 10.0.42.153:53032->10.0.31.225:6443: read: connection reset by peer
2026-04-18T21:55:23Z node/ip-10-0-7-114.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3jlv5h2b-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-114.us-west-1.compute.internal?timeout=10s - read tcp 10.0.7.114:57266->10.0.31.225:6443: read: connection reset by peer
2026-04-18T21:58:57Z node/ip-10-0-7-114.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3jlv5h2b-f90cf.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-114.us-west-1.compute.internal?timeout=10s - read tcp 10.0.7.114:53266->10.0.82.220:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.19-upgrade-from-stable-4.18-e2e-aws-ovn-upgrade (all) - 7 runs, 14% failed, 700% of failures match = 100% impact
#2046254479623327744junit2 hours ago
2026-04-20T16:58:40Z node/ip-10-0-39-129.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3vyfxs6g-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-129.us-east-2.compute.internal?timeout=10s - read tcp 10.0.39.129:43006->10.0.116.208:6443: read: connection reset by peer
2026-04-20T17:02:14Z node/ip-10-0-72-134.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3vyfxs6g-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-134.us-east-2.compute.internal?timeout=10s - read tcp 10.0.72.134:42040->10.0.116.208:6443: read: connection reset by peer
2026-04-20T17:02:15Z node/ip-10-0-39-129.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3vyfxs6g-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-129.us-east-2.compute.internal?timeout=10s - read tcp 10.0.39.129:49654->10.0.60.203:6443: read: connection reset by peer
2026-04-20T17:02:17Z node/ip-10-0-61-105.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3vyfxs6g-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-105.us-east-2.compute.internal?timeout=10s - read tcp 10.0.61.105:55422->10.0.116.208:6443: read: connection reset by peer
2026-04-20T17:41:45Z node/ip-10-0-88-241.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3vyfxs6g-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-88-241.us-east-2.compute.internal?timeout=10s - read tcp 10.0.88.241:36388->10.0.116.208:6443: read: connection reset by peer
2026-04-20T17:57:53Z node/ip-10-0-122-171.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3vyfxs6g-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-122-171.us-east-2.compute.internal?timeout=10s - read tcp 10.0.122.171:35052->10.0.60.203:6443: read: connection reset by peer
#2046114015716839424junit11 hours ago
2026-04-20T07:18:49Z node/ip-10-0-97-93.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pngkzmwd-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-97-93.us-west-2.compute.internal?timeout=10s - read tcp 10.0.97.93:33292->10.0.41.181:6443: read: connection reset by peer
2026-04-20T07:18:54Z node/ip-10-0-120-177.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pngkzmwd-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-177.us-west-2.compute.internal?timeout=10s - read tcp 10.0.120.177:48354->10.0.81.7:6443: read: connection reset by peer
2026-04-20T07:19:11Z node/ip-10-0-111-180.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pngkzmwd-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-180.us-west-2.compute.internal?timeout=10s - read tcp 10.0.111.180:51754->10.0.81.7:6443: read: connection reset by peer
2026-04-20T07:35:08Z node/ip-10-0-97-93.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pngkzmwd-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-97-93.us-west-2.compute.internal?timeout=10s - read tcp 10.0.97.93:55638->10.0.41.181:6443: read: connection reset by peer
2026-04-20T07:35:08Z node/ip-10-0-35-40.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pngkzmwd-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-40.us-west-2.compute.internal?timeout=10s - read tcp 10.0.35.40:44640->10.0.81.7:6443: read: connection reset by peer
2026-04-20T07:39:35Z node/ip-10-0-108-149.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pngkzmwd-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-108-149.us-west-2.compute.internal?timeout=10s - read tcp 10.0.108.149:42018->10.0.81.7:6443: read: connection reset by peer
2026-04-20T07:43:22Z node/ip-10-0-120-177.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pngkzmwd-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-177.us-west-2.compute.internal?timeout=10s - read tcp 10.0.120.177:47786->10.0.81.7:6443: read: connection reset by peer
2026-04-20T08:21:46Z node/ip-10-0-108-149.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pngkzmwd-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-108-149.us-west-2.compute.internal?timeout=10s - read tcp 10.0.108.149:59480->10.0.41.181:6443: read: connection reset by peer
2026-04-20T08:29:37Z node/ip-10-0-111-180.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pngkzmwd-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-180.us-west-2.compute.internal?timeout=10s - read tcp 10.0.111.180:46948->10.0.81.7:6443: read: connection reset by peer
2026-04-20T08:29:53Z node/ip-10-0-35-40.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pngkzmwd-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-40.us-west-2.compute.internal?timeout=10s - read tcp 10.0.35.40:47684->10.0.81.7:6443: read: connection reset by peer
2026-04-20T08:42:02Z node/ip-10-0-108-149.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pngkzmwd-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-108-149.us-west-2.compute.internal?timeout=10s - read tcp 10.0.108.149:33892->10.0.41.181:6443: read: connection reset by peer
2026-04-20T08:42:45Z node/ip-10-0-97-93.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pngkzmwd-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-97-93.us-west-2.compute.internal?timeout=10s - read tcp 10.0.97.93:53334->10.0.81.7:6443: read: connection reset by peer
#2045975469202870272junit20 hours ago
2026-04-19T22:09:36Z node/ip-10-0-25-162.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-hpj2hp1h-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-162.ec2.internal?timeout=10s - read tcp 10.0.25.162:41720->10.0.72.1:6443: read: connection reset by peer
2026-04-19T22:10:03Z node/ip-10-0-80-68.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-hpj2hp1h-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-68.ec2.internal?timeout=10s - read tcp 10.0.80.68:42516->10.0.72.1:6443: read: connection reset by peer
2026-04-19T22:10:07Z node/ip-10-0-25-162.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-hpj2hp1h-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-162.ec2.internal?timeout=10s - read tcp 10.0.25.162:41802->10.0.72.1:6443: read: connection reset by peer
2026-04-19T22:29:43Z node/ip-10-0-98-134.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-hpj2hp1h-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-98-134.ec2.internal?timeout=10s - read tcp 10.0.98.134:53640->10.0.21.201:6443: read: connection reset by peer
2026-04-19T22:33:25Z node/ip-10-0-58-197.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-hpj2hp1h-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-197.ec2.internal?timeout=10s - read tcp 10.0.58.197:34380->10.0.21.201:6443: read: connection reset by peer
2026-04-19T22:33:33Z node/ip-10-0-80-68.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-hpj2hp1h-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-68.ec2.internal?timeout=10s - read tcp 10.0.80.68:35268->10.0.21.201:6443: read: connection reset by peer
2026-04-19T22:33:38Z node/ip-10-0-98-134.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-hpj2hp1h-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-98-134.ec2.internal?timeout=10s - read tcp 10.0.98.134:35148->10.0.21.201:6443: read: connection reset by peer
2026-04-19T22:37:30Z node/ip-10-0-109-220.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-hpj2hp1h-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-220.ec2.internal?timeout=10s - read tcp 10.0.109.220:55932->10.0.72.1:6443: read: connection reset by peer
2026-04-19T22:37:31Z node/ip-10-0-118-30.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-hpj2hp1h-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-118-30.ec2.internal?timeout=10s - read tcp 10.0.118.30:33374->10.0.72.1:6443: read: connection reset by peer
2026-04-19T23:31:19Z node/ip-10-0-118-30.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-hpj2hp1h-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-118-30.ec2.internal?timeout=10s - read tcp 10.0.118.30:59694->10.0.72.1:6443: read: connection reset by peer
2026-04-19T23:31:43Z node/ip-10-0-58-197.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-hpj2hp1h-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-197.ec2.internal?timeout=10s - read tcp 10.0.58.197:39538->10.0.21.201:6443: read: connection reset by peer
#2045909551596703744junit25 hours ago
2026-04-19T17:47:19Z node/ip-10-0-10-216.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxh8pf7r-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-10-216.us-west-2.compute.internal?timeout=10s - read tcp 10.0.10.216:49578->10.0.107.54:6443: read: connection reset by peer
2026-04-19T18:07:19Z node/ip-10-0-34-104.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxh8pf7r-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-34-104.us-west-2.compute.internal?timeout=10s - read tcp 10.0.34.104:51376->10.0.4.4:6443: read: connection reset by peer
2026-04-19T18:07:19Z node/ip-10-0-67-249.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxh8pf7r-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-249.us-west-2.compute.internal?timeout=10s - read tcp 10.0.67.249:40034->10.0.107.54:6443: read: connection reset by peer
2026-04-19T18:11:28Z node/ip-10-0-10-216.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxh8pf7r-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-10-216.us-west-2.compute.internal?timeout=10s - read tcp 10.0.10.216:36334->10.0.4.4:6443: read: connection reset by peer
2026-04-19T18:11:44Z node/ip-10-0-34-104.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxh8pf7r-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-34-104.us-west-2.compute.internal?timeout=10s - read tcp 10.0.34.104:59096->10.0.4.4:6443: read: connection reset by peer
2026-04-19T19:06:10Z node/ip-10-0-14-6.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxh8pf7r-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-6.us-west-2.compute.internal?timeout=10s - read tcp 10.0.14.6:58820->10.0.4.4:6443: read: connection reset by peer
2026-04-19T19:06:22Z node/ip-10-0-67-249.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxh8pf7r-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-249.us-west-2.compute.internal?timeout=10s - read tcp 10.0.67.249:39372->10.0.4.4:6443: read: connection reset by peer
#2045853157740777472junit29 hours ago
2026-04-19T14:01:43Z node/ip-10-0-7-231.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-pmysb7px-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-231.ec2.internal?timeout=10s - read tcp 10.0.7.231:54888->10.0.115.213:6443: read: connection reset by peer
2026-04-19T14:01:51Z node/ip-10-0-9-52.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-pmysb7px-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-52.ec2.internal?timeout=10s - read tcp 10.0.9.52:46330->10.0.115.213:6443: read: connection reset by peer
2026-04-19T14:21:59Z node/ip-10-0-115-126.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-pmysb7px-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-115-126.ec2.internal?timeout=10s - read tcp 10.0.115.126:54658->10.0.115.213:6443: read: connection reset by peer
2026-04-19T14:22:07Z node/ip-10-0-23-207.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-pmysb7px-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-207.ec2.internal?timeout=10s - read tcp 10.0.23.207:60586->10.0.16.63:6443: read: connection reset by peer
2026-04-19T14:25:58Z node/ip-10-0-117-161.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-pmysb7px-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-161.ec2.internal?timeout=10s - read tcp 10.0.117.161:45064->10.0.16.63:6443: read: connection reset by peer
2026-04-19T14:26:01Z node/ip-10-0-23-207.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-pmysb7px-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-207.ec2.internal?timeout=10s - read tcp 10.0.23.207:36206->10.0.16.63:6443: read: connection reset by peer
2026-04-19T15:23:20Z node/ip-10-0-117-161.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-pmysb7px-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-161.ec2.internal?timeout=10s - read tcp 10.0.117.161:35588->10.0.16.63:6443: read: connection reset by peer
2026-04-19T15:23:22Z node/ip-10-0-7-231.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-pmysb7px-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-231.ec2.internal?timeout=10s - read tcp 10.0.7.231:56606->10.0.16.63:6443: read: connection reset by peer
2026-04-19T15:23:24Z node/ip-10-0-9-52.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-pmysb7px-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-52.ec2.internal?timeout=10s - read tcp 10.0.9.52:37924->10.0.16.63:6443: read: connection reset by peer
#2045724352896307200junit37 hours ago
2026-04-19T05:42:21Z node/ip-10-0-8-185.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6byg8g37-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-185.ec2.internal?timeout=10s - read tcp 10.0.8.185:47232->10.0.22.178:6443: read: connection reset by peer
2026-04-19T05:42:21Z node/ip-10-0-56-159.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6byg8g37-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-56-159.ec2.internal?timeout=10s - read tcp 10.0.56.159:42376->10.0.22.178:6443: read: connection reset by peer
2026-04-19T05:46:04Z node/ip-10-0-73-139.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6byg8g37-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-139.ec2.internal?timeout=10s - read tcp 10.0.73.139:41696->10.0.22.178:6443: read: connection reset by peer
2026-04-19T05:46:14Z node/ip-10-0-73-139.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6byg8g37-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-139.ec2.internal?timeout=10s - read tcp 10.0.73.139:45000->10.0.108.128:6443: read: connection reset by peer
2026-04-19T05:46:21Z node/ip-10-0-34-222.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6byg8g37-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-34-222.ec2.internal?timeout=10s - read tcp 10.0.34.222:46062->10.0.22.178:6443: read: connection reset by peer
2026-04-19T05:50:00Z node/ip-10-0-8-185.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6byg8g37-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-185.ec2.internal?timeout=10s - read tcp 10.0.8.185:47846->10.0.22.178:6443: read: connection reset by peer
2026-04-19T05:50:06Z node/ip-10-0-107-111.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6byg8g37-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-111.ec2.internal?timeout=10s - read tcp 10.0.107.111:55256->10.0.108.128:6443: read: connection reset by peer
2026-04-19T06:38:20Z node/ip-10-0-0-42.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6byg8g37-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-42.ec2.internal?timeout=10s - read tcp 10.0.0.42:58974->10.0.108.128:6443: read: connection reset by peer
2026-04-19T06:46:13Z node/ip-10-0-107-111.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-6byg8g37-3d54f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-111.ec2.internal?timeout=10s - read tcp 10.0.107.111:44468->10.0.22.178:6443: read: connection reset by peer
#2045632912644116480junit43 hours ago
2026-04-18T23:26:56Z node/ip-10-0-48-91.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-f8hvxf4z-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-91.ec2.internal?timeout=10s - read tcp 10.0.48.91:49866->10.0.69.223:6443: read: connection reset by peer
2026-04-18T23:54:43Z node/ip-10-0-92-21.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-f8hvxf4z-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-92-21.ec2.internal?timeout=10s - read tcp 10.0.92.21:59902->10.0.69.223:6443: read: connection reset by peer
2026-04-18T23:54:46Z node/ip-10-0-48-91.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-f8hvxf4z-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-91.ec2.internal?timeout=10s - read tcp 10.0.48.91:49264->10.0.69.223:6443: read: connection reset by peer
2026-04-18T23:54:47Z node/ip-10-0-95-55.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-f8hvxf4z-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-95-55.ec2.internal?timeout=10s - read tcp 10.0.95.55:40986->10.0.69.223:6443: read: connection reset by peer
2026-04-18T23:54:56Z node/ip-10-0-48-91.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-f8hvxf4z-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-91.ec2.internal?timeout=10s - read tcp 10.0.48.91:49266->10.0.69.223:6443: read: connection reset by peer
2026-04-18T23:58:46Z node/ip-10-0-101-10.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-f8hvxf4z-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-10.ec2.internal?timeout=10s - read tcp 10.0.101.10:40158->10.0.54.183:6443: read: connection reset by peer
2026-04-18T23:58:47Z node/ip-10-0-117-100.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-f8hvxf4z-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-100.ec2.internal?timeout=10s - read tcp 10.0.117.100:57036->10.0.54.183:6443: read: connection reset by peer
2026-04-19T00:46:17Z node/ip-10-0-117-100.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-f8hvxf4z-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-100.ec2.internal?timeout=10s - read tcp 10.0.117.100:55732->10.0.69.223:6443: read: connection reset by peer
2026-04-19T00:54:18Z node/ip-10-0-23-213.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-f8hvxf4z-3d54f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-213.ec2.internal?timeout=10s - read tcp 10.0.23.213:49762->10.0.69.223:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-aws-ovn-network-flow-matrix (all) - 2 runs, 0% failed, 100% of runs match
#2046242712377626624junit2 hours ago
2026-04-20T15:51:12Z node/ip-10-0-26-43.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-43.us-west-1.compute.internal?timeout=10s - read tcp 10.0.26.43:38966->10.0.126.250:6443: read: connection reset by peer
2026-04-20T15:51:22Z node/ip-10-0-0-245.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-245.us-west-1.compute.internal?timeout=10s - read tcp 10.0.0.245:54612->10.0.126.250:6443: read: connection reset by peer
2026-04-20T15:51:25Z node/ip-10-0-116-30.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-30.us-west-1.compute.internal?timeout=10s - read tcp 10.0.116.30:40916->10.0.126.250:6443: read: connection reset by peer
2026-04-20T15:51:46Z node/ip-10-0-24-9.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-9.us-west-1.compute.internal?timeout=10s - read tcp 10.0.24.9:38310->10.0.126.250:6443: read: connection reset by peer
2026-04-20T17:41:10Z node/ip-10-0-24-9.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-9.us-west-1.compute.internal?timeout=10s - read tcp 10.0.24.9:57878->10.0.50.226:6443: read: connection reset by peer
2026-04-20T17:46:00Z node/ip-10-0-26-43.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-43.us-west-1.compute.internal?timeout=10s - read tcp 10.0.26.43:36762->10.0.126.250:6443: read: connection reset by peer
2026-04-20T17:46:02Z node/ip-10-0-83-246.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-246.us-west-1.compute.internal?timeout=10s - read tcp 10.0.83.246:58280->10.0.126.250:6443: read: connection reset by peer
2026-04-20T17:46:02Z node/ip-10-0-116-30.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-30.us-west-1.compute.internal?timeout=10s - read tcp 10.0.116.30:42910->10.0.50.226:6443: read: connection reset by peer
2026-04-20T17:46:06Z node/ip-10-0-24-9.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-9.us-west-1.compute.internal?timeout=10s - read tcp 10.0.24.9:53504->10.0.50.226:6443: read: connection reset by peer
2026-04-20T17:49:45Z node/ip-10-0-0-214.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-214.us-west-1.compute.internal?timeout=10s - read tcp 10.0.0.214:47334->10.0.126.250:6443: read: connection reset by peer
2026-04-20T17:49:45Z node/ip-10-0-0-214.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-214.us-west-1.compute.internal?timeout=10s - read tcp 10.0.0.214:47440->10.0.126.250:6443: read: connection reset by peer
2026-04-20T17:53:41Z node/ip-10-0-116-30.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-30.us-west-1.compute.internal?timeout=10s - read tcp 10.0.116.30:37734->10.0.126.250:6443: read: connection reset by peer
2026-04-20T17:53:51Z node/ip-10-0-0-245.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-245.us-west-1.compute.internal?timeout=10s - read tcp 10.0.0.245:45182->10.0.50.226:6443: read: connection reset by peer
2026-04-20T18:50:50Z node/ip-10-0-26-43.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-747ibbk2-a9cb9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-43.us-west-1.compute.internal?timeout=10s - context deadline exceeded
#2045880193150619648junit26 hours ago
2026-04-19T17:28:32Z node/ip-10-0-17-63.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-12hwx2rn-a9cb9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-63.us-west-1.compute.internal?timeout=10s - read tcp 10.0.17.63:51030->10.0.121.98:6443: read: connection reset by peer
2026-04-19T17:32:27Z node/ip-10-0-127-146.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-12hwx2rn-a9cb9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-146.us-west-1.compute.internal?timeout=10s - read tcp 10.0.127.146:41830->10.0.52.157:6443: read: connection reset by peer
2026-04-19T17:32:27Z node/ip-10-0-17-63.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-12hwx2rn-a9cb9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-63.us-west-1.compute.internal?timeout=10s - read tcp 10.0.17.63:54906->10.0.121.98:6443: read: connection reset by peer
2026-04-19T17:36:30Z node/ip-10-0-44-231.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-12hwx2rn-a9cb9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-231.us-west-1.compute.internal?timeout=10s - read tcp 10.0.44.231:54630->10.0.121.98:6443: read: connection reset by peer
2026-04-19T17:41:11Z node/ip-10-0-88-69.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-12hwx2rn-a9cb9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-88-69.us-west-1.compute.internal?timeout=10s - read tcp 10.0.88.69:53446->10.0.121.98:6443: read: connection reset by peer
2026-04-19T17:41:14Z node/ip-10-0-70-176.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-12hwx2rn-a9cb9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-176.us-west-1.compute.internal?timeout=10s - read tcp 10.0.70.176:51424->10.0.52.157:6443: read: connection reset by peer
2026-04-19T17:44:59Z node/ip-10-0-103-229.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-12hwx2rn-a9cb9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-229.us-west-1.compute.internal?timeout=10s - read tcp 10.0.103.229:33052->10.0.121.98:6443: read: connection reset by peer
2026-04-19T17:45:00Z node/ip-10-0-44-231.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-12hwx2rn-a9cb9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-231.us-west-1.compute.internal?timeout=10s - read tcp 10.0.44.231:58688->10.0.52.157:6443: read: connection reset by peer
2026-04-19T17:45:01Z node/ip-10-0-17-63.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-12hwx2rn-a9cb9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-63.us-west-1.compute.internal?timeout=10s - read tcp 10.0.17.63:47658->10.0.52.157:6443: read: connection reset by peer
2026-04-19T17:48:56Z node/ip-10-0-44-231.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-12hwx2rn-a9cb9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-231.us-west-1.compute.internal?timeout=10s - read tcp 10.0.44.231:49842->10.0.52.157:6443: read: connection reset by peer
2026-04-19T17:48:56Z node/ip-10-0-127-146.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-12hwx2rn-a9cb9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-146.us-west-1.compute.internal?timeout=10s - read tcp 10.0.127.146:56790->10.0.121.98:6443: read: connection reset by peer
2026-04-19T18:05:04Z node/ip-10-0-127-146.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-12hwx2rn-a9cb9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-146.us-west-1.compute.internal?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-upgrade (all) - 7 runs, 14% failed, 700% of failures match = 100% impact
#2046237080488513536junit2 hours ago
2026-04-20T17:15:53Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - write tcp 192.168.111.20:55842->192.168.111.5:6443: write: connection reset by peer
2026-04-20T17:15:59Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:59896->192.168.111.5:6443: read: connection reset by peer
2026-04-20T17:16:00Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:50710->192.168.111.5:6443: read: connection reset by peer
2026-04-20T17:16:01Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:40408->192.168.111.5:6443: read: connection reset by peer
#2046144469568327680junit8 hours ago
2026-04-20T11:03:27Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:45462->192.168.111.5:6443: read: connection reset by peer
2026-04-20T11:03:29Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:50622->192.168.111.5:6443: read: connection reset by peer
2026-04-20T11:03:30Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:37218->192.168.111.5:6443: read: connection reset by peer
2026-04-20T11:03:30Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:43158->192.168.111.5:6443: read: connection reset by peer
2026-04-20T11:03:33Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:40084->192.168.111.5:6443: read: connection reset by peer
#2046044917897105408junit15 hours ago
2026-04-20T04:37:59Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:60360->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:38:00Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:53736->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:38:05Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:56244->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:38:08Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:49222->192.168.111.5:6443: read: connection reset by peer
#2045949215363829760junit21 hours ago
2026-04-19T21:39:43Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T22:09:25Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:36768->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:09:26Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:35274->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:09:28Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:42806->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:09:31Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:49746->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:09:33Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:50662->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:15:50Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:53300->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:15:50Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:35326->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:15:53Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:39988->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:15:54Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:50332->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:15:55Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:48770->192.168.111.5:6443: read: connection reset by peer
#2045833232590573568junit31 hours ago
2026-04-19T14:06:50Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:34752->192.168.111.5:6443: read: connection reset by peer
2026-04-19T14:06:50Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:45846->192.168.111.5:6443: read: connection reset by peer
2026-04-19T14:06:51Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:37672->192.168.111.5:6443: read: connection reset by peer
2026-04-19T14:06:54Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:38550->192.168.111.5:6443: read: connection reset by peer
#2045733608726990848junit36 hours ago
2026-04-19T07:14:47Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T07:49:47Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:50852->192.168.111.5:6443: read: connection reset by peer
2026-04-19T07:49:50Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:34154->192.168.111.5:6443: read: connection reset by peer
2026-04-19T07:49:51Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:53466->192.168.111.5:6443: read: connection reset by peer
2026-04-19T07:49:51Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:58032->192.168.111.5:6443: read: connection reset by peer
#2045643386991415296junit42 hours ago
2026-04-19T01:47:03Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:50460->192.168.111.5:6443: read: connection reset by peer
2026-04-19T01:47:04Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:43868->192.168.111.5:6443: read: connection reset by peer
2026-04-19T01:47:05Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:59062->192.168.111.5:6443: read: connection reset by peer
2026-04-19T01:53:01Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:45312->192.168.111.5:6443: read: connection reset by peer
2026-04-19T01:53:01Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:34668->192.168.111.5:6443: read: connection reset by peer
2026-04-19T01:53:02Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:57086->192.168.111.5:6443: read: connection reset by peer
2026-04-19T01:53:04Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:41036->192.168.111.5:6443: read: connection reset by peer
2026-04-19T01:53:09Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:39606->192.168.111.5:6443: read: connection reset by peer
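The FailedToUpdateLease events above all share one fixed shape: an RFC 3339 timestamp, a `node/<name>` token, the lease URL under `coordination.k8s.io/v1`, and a trailing transport error. A minimal parser sketch for aggregating such lines by node and failure class (the field names and error buckets here are my own, not part of the tool's output):

```python
import re
from collections import Counter

# Matches event lines of the form:
#   <timestamp>Z node/<name> - reason/<reason> <lease URL> - <error text>
EVENT_RE = re.compile(
    r"^(?P<ts>\S+Z) node/(?P<node>\S+) - reason/(?P<reason>\S+) "
    r"(?P<url>https://\S+) - (?P<error>.+)$"
)

def classify(error: str) -> str:
    """Bucket the trailing transport error into a coarse failure class."""
    if "connection reset by peer" in error:
        return "conn-reset"
    if "Client.Timeout exceeded" in error:
        return "client-timeout"
    if "context deadline exceeded" in error:
        return "deadline"
    return "other"

def summarize(lines):
    """Count FailedToUpdateLease events per (node, failure class)."""
    counts = Counter()
    for line in lines:
        m = EVENT_RE.match(line.strip())
        if m and m.group("reason") == "FailedToUpdateLease":
            counts[(m.group("node"), classify(m.group("error")))] += 1
    return counts
```

Run over a job's event stream, this makes it easy to see whether resets cluster on one apiserver endpoint (expected during a rolling API-server restart) or spread across all nodes at once.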
pull-ci-openshift-hypershift-main-e2e-conformance (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046260340093620224junit2 hours ago
# [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
fail [k8s.io/kubernetes/test/e2e/common/node/container_probe.go:1721]: DeferCleanup callback returned error: Delete "https://a63f823f78223495a870a18070c01f5f-e05c2ab3c1425d3d.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-container-probe-3499/pods/test-webserver-e30b9123-d4e0-44ce-8cfb-287cbd560575": read tcp 10.128.106.148:58886->32.193.204.217:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.18-upgrade-from-stable-4.17-e2e-aws-ovn-upgrade (all) - 6 runs, 50% failed, 167% of failures match = 83% impact
#2046249368872292352junit2 hours ago
2026-04-20T16:42:50Z node/ip-10-0-32-165.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-p9rfl0gh-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-165.ec2.internal?timeout=10s - read tcp 10.0.32.165:41526->10.0.127.140:6443: read: connection reset by peer
2026-04-20T17:31:14Z node/ip-10-0-26-149.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-p9rfl0gh-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-149.ec2.internal?timeout=10s - read tcp 10.0.26.149:51040->10.0.127.140:6443: read: connection reset by peer
2026-04-20T17:38:32Z node/ip-10-0-32-165.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-p9rfl0gh-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-165.ec2.internal?timeout=10s - read tcp 10.0.32.165:42652->10.0.62.41:6443: read: connection reset by peer
2026-04-20T17:38:33Z node/ip-10-0-26-149.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-p9rfl0gh-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-149.ec2.internal?timeout=10s - read tcp 10.0.26.149:41704->10.0.62.41:6443: read: connection reset by peer
#2046190972441726976junit6 hours ago
2026-04-20T12:39:48Z node/ip-10-0-107-229.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-55sphzx6-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-229.us-west-2.compute.internal?timeout=10s - read tcp 10.0.107.229:52518->10.0.0.69:6443: read: connection reset by peer
2026-04-20T12:39:50Z node/ip-10-0-54-192.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-55sphzx6-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-54-192.us-west-2.compute.internal?timeout=10s - read tcp 10.0.54.192:55824->10.0.0.69:6443: read: connection reset by peer
2026-04-20T12:40:00Z node/ip-10-0-31-108.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-55sphzx6-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-108.us-west-2.compute.internal?timeout=10s - read tcp 10.0.31.108:49558->10.0.0.69:6443: read: connection reset by peer
2026-04-20T12:43:53Z node/ip-10-0-107-229.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-55sphzx6-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-229.us-west-2.compute.internal?timeout=10s - read tcp 10.0.107.229:55452->10.0.0.69:6443: read: connection reset by peer
2026-04-20T13:29:20Z node/ip-10-0-58-148.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-55sphzx6-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-148.us-west-2.compute.internal?timeout=10s - read tcp 10.0.58.148:60668->10.0.0.69:6443: read: connection reset by peer
#2045678592553127936junit40 hours ago
2026-04-19T02:43:10Z node/ip-10-0-4-0.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-7czjj3g4-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-0.us-east-2.compute.internal?timeout=10s - read tcp 10.0.4.0:60580->10.0.100.174:6443: read: connection reset by peer
2026-04-19T02:47:09Z node/ip-10-0-20-184.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-7czjj3g4-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-184.us-east-2.compute.internal?timeout=10s - read tcp 10.0.20.184:53098->10.0.34.14:6443: read: connection reset by peer
2026-04-19T02:47:17Z node/ip-10-0-92-182.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-7czjj3g4-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-92-182.us-east-2.compute.internal?timeout=10s - read tcp 10.0.92.182:35316->10.0.100.174:6443: read: connection reset by peer
2026-04-19T02:47:24Z node/ip-10-0-70-113.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-7czjj3g4-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-113.us-east-2.compute.internal?timeout=10s - read tcp 10.0.70.113:41756->10.0.100.174:6443: read: connection reset by peer
2026-04-19T03:26:06Z node/ip-10-0-65-82.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-7czjj3g4-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-82.us-east-2.compute.internal?timeout=10s - read tcp 10.0.65.82:51996->10.0.34.14:6443: read: connection reset by peer
2026-04-19T03:41:04Z node/ip-10-0-92-182.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-7czjj3g4-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-92-182.us-east-2.compute.internal?timeout=10s - read tcp 10.0.92.182:59296->10.0.34.14:6443: read: connection reset by peer
#2045623795418402816junit44 hours ago
2026-04-18T23:02:50Z node/ip-10-0-74-126.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jgcq1bv3-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-74-126.us-west-2.compute.internal?timeout=10s - read tcp 10.0.74.126:51964->10.0.119.123:6443: read: connection reset by peer
2026-04-18T23:25:33Z node/ip-10-0-104-57.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jgcq1bv3-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-57.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045623795418402816junit44 hours ago
2026-04-18T23:25:38Z node/ip-10-0-43-66.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jgcq1bv3-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-66.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-18T23:53:59Z node/ip-10-0-77-15.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jgcq1bv3-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-77-15.us-west-2.compute.internal?timeout=10s - read tcp 10.0.77.15:37336->10.0.23.162:6443: read: connection reset by peer
2026-04-18T23:54:02Z node/ip-10-0-74-126.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jgcq1bv3-2b83e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-74-126.us-west-2.compute.internal?timeout=10s - read tcp 10.0.74.126:38520->10.0.119.123:6443: read: connection reset by peer
#2045572408517070848junit2 days ago
2026-04-18T20:28:26Z node/ip-10-0-38-109.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-2cr1s22y-2b83e.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-38-109.us-east-2.compute.internal?timeout=10s - read tcp 10.0.38.109:55916->10.0.71.48:6443: read: connection reset by peer
#2045572408517070848junit2 days ago
Apr 18 20:21:51.009 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics: Patch "https://api-int.ci-op-2cr1s22y-2b83e.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-diagnostics/roles/network-diagnostics?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.35.211:36058->10.0.71.48:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 18 20:21:51.009 - 27s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics: Patch "https://api-int.ci-op-2cr1s22y-2b83e.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-diagnostics/roles/network-diagnostics?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.35.211:36058->10.0.71.48:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 18 20:22:18.243 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
periodic-ci-shiftstack-ci-release-4.22-upgrade-from-stable-4.21-e2e-openstack-ovn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046216435755126784junit2 hours ago
2026-04-20T14:35:50Z node/b65rbqpd-97b7c-xtdfr-master-1 - reason/FailedToUpdateLease https://api-int.b65rbqpd-97b7c.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/b65rbqpd-97b7c-xtdfr-master-1?timeout=10s - read tcp 10.0.0.82:36264->10.0.0.5:6443: read: connection reset by peer
2026-04-20T14:35:51Z node/b65rbqpd-97b7c-xtdfr-master-2 - reason/FailedToUpdateLease https://api-int.b65rbqpd-97b7c.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/b65rbqpd-97b7c-xtdfr-master-2?timeout=10s - write tcp 10.0.1.54:56378->10.0.0.5:6443: write: connection reset by peer
2026-04-20T14:35:54Z node/b65rbqpd-97b7c-xtdfr-worker-0-pk248 - reason/FailedToUpdateLease https://api-int.b65rbqpd-97b7c.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/b65rbqpd-97b7c-xtdfr-worker-0-pk248?timeout=10s - read tcp 10.0.0.176:34318->10.0.0.5:6443: read: connection reset by peer
2026-04-20T14:35:55Z node/b65rbqpd-97b7c-xtdfr-worker-0-7jknk - reason/FailedToUpdateLease https://api-int.b65rbqpd-97b7c.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/b65rbqpd-97b7c-xtdfr-worker-0-7jknk?timeout=10s - read tcp 10.0.0.213:45908->10.0.0.5:6443: read: connection reset by peer
2026-04-20T14:35:55Z node/b65rbqpd-97b7c-xtdfr-worker-0-pk248 - reason/FailedToUpdateLease https://api-int.b65rbqpd-97b7c.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/b65rbqpd-97b7c-xtdfr-worker-0-pk248?timeout=10s - read tcp 10.0.0.176:34150->10.0.0.5:6443: read: connection reset by peer
2026-04-20T14:35:57Z node/b65rbqpd-97b7c-xtdfr-worker-0-7tl82 - reason/FailedToUpdateLease https://api-int.b65rbqpd-97b7c.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/b65rbqpd-97b7c-xtdfr-worker-0-7tl82?timeout=10s - read tcp 10.0.1.79:47728->10.0.0.5:6443: read: connection reset by peer
2026-04-20T14:41:10Z node/b65rbqpd-97b7c-xtdfr-master-0 - reason/FailedToUpdateLease https://api-int.b65rbqpd-97b7c.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/b65rbqpd-97b7c-xtdfr-master-0?timeout=10s - write tcp 10.0.2.123:49682->10.0.0.5:6443: write: connection reset by peer
2026-04-20T14:41:15Z node/b65rbqpd-97b7c-xtdfr-worker-0-7jknk - reason/FailedToUpdateLease https://api-int.b65rbqpd-97b7c.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/b65rbqpd-97b7c-xtdfr-worker-0-7jknk?timeout=10s - read tcp 10.0.0.213:59520->10.0.0.5:6443: read: connection reset by peer
2026-04-20T14:41:17Z node/b65rbqpd-97b7c-xtdfr-worker-0-pk248 - reason/FailedToUpdateLease https://api-int.b65rbqpd-97b7c.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/b65rbqpd-97b7c-xtdfr-worker-0-pk248?timeout=10s - read tcp 10.0.0.176:55980->10.0.0.5:6443: read: connection reset by peer
2026-04-20T15:09:54Z node/b65rbqpd-97b7c-xtdfr-master-1 - reason/FailedToUpdateLease https://api-int.b65rbqpd-97b7c.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/b65rbqpd-97b7c-xtdfr-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.19-e2e-aws-ovn-upgrade-fips (all) - 9 runs, 22% failed, 400% of failures match = 89% impact
#2046233384505577472junit2 hours ago
2026-04-20T15:12:53Z node/ip-10-0-126-177.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-d3fv2wnj-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-177.us-west-2.compute.internal?timeout=10s - read tcp 10.0.126.177:33728->10.0.1.245:6443: read: connection reset by peer
2026-04-20T15:12:58Z node/ip-10-0-52-16.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-d3fv2wnj-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-16.us-west-2.compute.internal?timeout=10s - read tcp 10.0.52.16:32962->10.0.1.245:6443: read: connection reset by peer
2026-04-20T15:37:17Z node/ip-10-0-85-218.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-d3fv2wnj-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-85-218.us-west-2.compute.internal?timeout=10s - read tcp 10.0.85.218:44012->10.0.86.241:6443: read: connection reset by peer
2026-04-20T15:37:50Z node/ip-10-0-55-192.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-d3fv2wnj-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-55-192.us-west-2.compute.internal?timeout=10s - read tcp 10.0.55.192:52322->10.0.86.241:6443: read: connection reset by peer
2026-04-20T16:14:08Z node/ip-10-0-85-133.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-d3fv2wnj-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-85-133.us-west-2.compute.internal?timeout=10s - read tcp 10.0.85.133:56992->10.0.1.245:6443: read: connection reset by peer
2026-04-20T16:21:45Z node/ip-10-0-55-192.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-d3fv2wnj-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-55-192.us-west-2.compute.internal?timeout=10s - read tcp 10.0.55.192:54884->10.0.86.241:6443: read: connection reset by peer
#2046179947717857280junit7 hours ago
2026-04-20T12:08:15Z node/ip-10-0-59-56.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wvkw1g03-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-56.ec2.internal?timeout=10s - read tcp 10.0.59.56:44232->10.0.107.96:6443: read: connection reset by peer
2026-04-20T12:08:18Z node/ip-10-0-5-109.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wvkw1g03-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-109.ec2.internal?timeout=10s - read tcp 10.0.5.109:59676->10.0.107.96:6443: read: connection reset by peer
2026-04-20T12:08:21Z node/ip-10-0-8-231.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wvkw1g03-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-231.ec2.internal?timeout=10s - read tcp 10.0.8.231:57844->10.0.60.127:6443: read: connection reset by peer
2026-04-20T12:08:26Z node/ip-10-0-93-131.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wvkw1g03-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-93-131.ec2.internal?timeout=10s - read tcp 10.0.93.131:32956->10.0.60.127:6443: read: connection reset by peer
2026-04-20T12:16:10Z node/ip-10-0-8-231.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wvkw1g03-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-231.ec2.internal?timeout=10s - read tcp 10.0.8.231:32896->10.0.107.96:6443: read: connection reset by peer
2026-04-20T12:25:03Z node/ip-10-0-23-184.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wvkw1g03-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-184.ec2.internal?timeout=10s - read tcp 10.0.23.184:58792->10.0.107.96:6443: read: connection reset by peer
2026-04-20T12:32:26Z node/ip-10-0-8-231.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wvkw1g03-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-231.ec2.internal?timeout=10s - read tcp 10.0.8.231:34694->10.0.107.96:6443: read: connection reset by peer
#2046045933216468992junit16 hours ago
2026-04-20T03:10:11Z node/ip-10-0-100-45.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mbljcf62-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-45.us-west-1.compute.internal?timeout=10s - read tcp 10.0.100.45:58998->10.0.22.74:6443: read: connection reset by peer
2026-04-20T03:34:29Z node/ip-10-0-89-111.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mbljcf62-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-111.us-west-1.compute.internal?timeout=10s - read tcp 10.0.89.111:46016->10.0.125.68:6443: read: connection reset by peer
2026-04-20T03:42:15Z node/ip-10-0-100-45.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mbljcf62-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-45.us-west-1.compute.internal?timeout=10s - read tcp 10.0.100.45:51468->10.0.22.74:6443: read: connection reset by peer
#2045982174485680128junit21 hours ago
2026-04-19T22:32:07Z node/ip-10-0-9-193.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4hy1si0t-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-193.us-east-2.compute.internal?timeout=10s - read tcp 10.0.9.193:60734->10.0.61.185:6443: read: connection reset by peer
2026-04-19T22:35:58Z node/ip-10-0-82-178.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4hy1si0t-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-82-178.us-east-2.compute.internal?timeout=10s - read tcp 10.0.82.178:48418->10.0.78.7:6443: read: connection reset by peer
2026-04-19T22:36:01Z node/ip-10-0-44-205.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4hy1si0t-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-205.us-east-2.compute.internal?timeout=10s - read tcp 10.0.44.205:54198->10.0.78.7:6443: read: connection reset by peer
2026-04-19T22:54:53Z node/ip-10-0-44-205.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4hy1si0t-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-205.us-east-2.compute.internal?timeout=10s - read tcp 10.0.44.205:57444->10.0.61.185:6443: read: connection reset by peer
2026-04-19T22:58:47Z node/ip-10-0-9-193.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4hy1si0t-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-193.us-east-2.compute.internal?timeout=10s - read tcp 10.0.9.193:47370->10.0.61.185:6443: read: connection reset by peer
2026-04-19T22:58:48Z node/ip-10-0-44-205.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4hy1si0t-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-205.us-east-2.compute.internal?timeout=10s - read tcp 10.0.44.205:60452->10.0.78.7:6443: read: connection reset by peer
2026-04-19T22:58:53Z node/ip-10-0-116-69.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4hy1si0t-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-69.us-east-2.compute.internal?timeout=10s - read tcp 10.0.116.69:37764->10.0.61.185:6443: read: connection reset by peer
2026-04-19T22:58:57Z node/ip-10-0-9-193.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4hy1si0t-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-193.us-east-2.compute.internal?timeout=10s - read tcp 10.0.9.193:47180->10.0.61.185:6443: read: connection reset by peer
#2045916671729733632junit25 hours ago
2026-04-19T18:34:10Z node/ip-10-0-108-244.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fmrp48gd-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-108-244.us-east-2.compute.internal?timeout=10s - read tcp 10.0.108.244:55066->10.0.20.163:6443: read: connection reset by peer
2026-04-19T18:34:15Z node/ip-10-0-106-96.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fmrp48gd-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-96.us-east-2.compute.internal?timeout=10s - read tcp 10.0.106.96:45812->10.0.20.163:6443: read: connection reset by peer
2026-04-19T18:38:45Z node/ip-10-0-73-242.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fmrp48gd-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-242.us-east-2.compute.internal?timeout=10s - read tcp 10.0.73.242:55814->10.0.84.67:6443: read: connection reset by peer
2026-04-19T18:55:46Z node/ip-10-0-37-65.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fmrp48gd-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-65.us-east-2.compute.internal?timeout=10s - read tcp 10.0.37.65:55750->10.0.84.67:6443: read: connection reset by peer
2026-04-19T19:03:24Z node/ip-10-0-4-111.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fmrp48gd-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-111.us-east-2.compute.internal?timeout=10s - read tcp 10.0.4.111:53558->10.0.84.67:6443: read: connection reset by peer
2026-04-19T19:03:26Z node/ip-10-0-108-244.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fmrp48gd-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-108-244.us-east-2.compute.internal?timeout=10s - read tcp 10.0.108.244:54140->10.0.84.67:6443: read: connection reset by peer
#2045834610142613504junit30 hours ago
2026-04-19T13:00:38Z node/ip-10-0-127-151.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i6lvh4rk-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-151.us-west-2.compute.internal?timeout=10s - read tcp 10.0.127.151:33196->10.0.3.154:6443: read: connection reset by peer
2026-04-19T13:00:38Z node/ip-10-0-61-50.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i6lvh4rk-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-50.us-west-2.compute.internal?timeout=10s - read tcp 10.0.61.50:59818->10.0.3.154:6443: read: connection reset by peer
2026-04-19T13:01:08Z node/ip-10-0-49-153.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i6lvh4rk-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-153.us-west-2.compute.internal?timeout=10s - read tcp 10.0.49.153:43500->10.0.82.7:6443: read: connection reset by peer
2026-04-19T13:04:43Z node/ip-10-0-49-153.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i6lvh4rk-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-153.us-west-2.compute.internal?timeout=10s - read tcp 10.0.49.153:37258->10.0.3.154:6443: read: connection reset by peer
2026-04-19T13:05:23Z node/ip-10-0-127-151.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i6lvh4rk-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-151.us-west-2.compute.internal?timeout=10s - read tcp 10.0.127.151:57848->10.0.82.7:6443: read: connection reset by peer
2026-04-19T13:08:48Z node/ip-10-0-101-48.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i6lvh4rk-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-48.us-west-2.compute.internal?timeout=10s - read tcp 10.0.101.48:51520->10.0.3.154:6443: read: connection reset by peer
2026-04-19T13:08:57Z node/ip-10-0-61-50.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i6lvh4rk-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-50.us-west-2.compute.internal?timeout=10s - read tcp 10.0.61.50:46766->10.0.3.154:6443: read: connection reset by peer
2026-04-19T13:56:37Z node/ip-10-0-49-153.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i6lvh4rk-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-153.us-west-2.compute.internal?timeout=10s - read tcp 10.0.49.153:44010->10.0.3.154:6443: read: connection reset by peer
2026-04-19T14:04:19Z node/ip-10-0-101-48.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i6lvh4rk-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-48.us-west-2.compute.internal?timeout=10s - read tcp 10.0.101.48:49904->10.0.3.154:6443: read: connection reset by peer
2026-04-19T14:04:24Z node/ip-10-0-127-151.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i6lvh4rk-5c6e6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-151.us-west-2.compute.internal?timeout=10s - read tcp 10.0.127.151:59636->10.0.3.154:6443: read: connection reset by peer
#2045741305237082112junit36 hours ago
2026-04-19T06:36:30Z node/ip-10-0-84-242.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qrr7zn9x-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-242.ec2.internal?timeout=10s - read tcp 10.0.84.242:60678->10.0.86.208:6443: read: connection reset by peer
2026-04-19T06:36:32Z node/ip-10-0-8-157.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qrr7zn9x-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-157.ec2.internal?timeout=10s - read tcp 10.0.8.157:60886->10.0.86.208:6443: read: connection reset by peer
2026-04-19T06:36:37Z node/ip-10-0-67-29.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qrr7zn9x-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-29.ec2.internal?timeout=10s - read tcp 10.0.67.29:53858->10.0.86.208:6443: read: connection reset by peer
2026-04-19T06:59:39Z node/ip-10-0-23-246.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qrr7zn9x-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-246.ec2.internal?timeout=10s - read tcp 10.0.23.246:42570->10.0.1.111:6443: read: connection reset by peer
2026-04-19T06:59:42Z node/ip-10-0-84-242.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qrr7zn9x-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-242.ec2.internal?timeout=10s - read tcp 10.0.84.242:42376->10.0.86.208:6443: read: connection reset by peer
#2045646765599756288junit42 hours ago
2026-04-19T00:39:26Z node/ip-10-0-11-55.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x4ik9hl6-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-55.us-east-2.compute.internal?timeout=10s - read tcp 10.0.11.55:38874->10.0.125.238:6443: read: connection reset by peer
2026-04-19T00:43:20Z node/ip-10-0-112-138.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x4ik9hl6-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-138.us-east-2.compute.internal?timeout=10s - read tcp 10.0.112.138:38910->10.0.32.135:6443: read: connection reset by peer
2026-04-19T00:43:29Z node/ip-10-0-16-101.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x4ik9hl6-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-101.us-east-2.compute.internal?timeout=10s - read tcp 10.0.16.101:41902->10.0.125.238:6443: read: connection reset by peer
2026-04-19T01:20:06Z node/ip-10-0-119-125.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x4ik9hl6-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-125.us-east-2.compute.internal?timeout=10s - read tcp 10.0.119.125:34420->10.0.32.135:6443: read: connection reset by peer
2026-04-19T01:34:44Z node/ip-10-0-16-101.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x4ik9hl6-5c6e6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-101.us-east-2.compute.internal?timeout=10s - read tcp 10.0.16.101:35572->10.0.125.238:6443: read: connection reset by peer
periodic-ci-openshift-release-main-okd-scos-4.23-e2e-aws-ovn (all) - 1 runs, 0% failed, 100% of runs match
#2046252945653108736junit3 hours ago
2026-04-20T16:43:05Z node/ip-10-0-55-94.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tg4bc3gr-2a49d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-55-94.us-west-2.compute.internal?timeout=10s - read tcp 10.0.55.94:49708->10.0.71.114:6443: read: connection reset by peer
2026-04-20T16:53:27Z node/ip-10-0-112-24.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tg4bc3gr-2a49d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-24.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-okd-scos-4.23-e2e-aws-ovn-techpreview (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046252948324880384junit3 hours ago
2026-04-20T16:45:46Z node/ip-10-0-121-176.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-gm0vjil9-7d41f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-176.us-east-2.compute.internal?timeout=10s - read tcp 10.0.121.176:39170->10.0.32.211:6443: read: connection reset by peer
2026-04-20T16:45:47Z node/ip-10-0-119-3.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-gm0vjil9-7d41f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-3.us-east-2.compute.internal?timeout=10s - read tcp 10.0.119.3:37766->10.0.124.212:6443: read: connection reset by peer
2026-04-20T16:45:51Z node/ip-10-0-36-81.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-gm0vjil9-7d41f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-81.us-east-2.compute.internal?timeout=10s - read tcp 10.0.36.81:47588->10.0.32.211:6443: read: connection reset by peer
2026-04-20T16:46:18Z node/ip-10-0-49-145.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-gm0vjil9-7d41f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-145.us-east-2.compute.internal?timeout=10s - read tcp 10.0.49.145:47428->10.0.32.211:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.21-upgrade-from-stable-4.20-ocp-e2e-upgrade-aws-ovn-multi-a-a (all) - 6 runs, 67% failed, 150% of failures match = 100% impact
#2046232032232607744junit3 hours ago
2026-04-20T15:40:22Z node/ip-10-0-105-154.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4k1drimf-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-154.ec2.internal?timeout=10s - read tcp 10.0.105.154:59626->10.0.60.18:6443: read: connection reset by peer
2026-04-20T15:40:30Z node/ip-10-0-106-219.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4k1drimf-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-219.ec2.internal?timeout=10s - read tcp 10.0.106.219:43194->10.0.68.27:6443: read: connection reset by peer
2026-04-20T15:40:34Z node/ip-10-0-51-122.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4k1drimf-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-122.ec2.internal?timeout=10s - read tcp 10.0.51.122:44514->10.0.68.27:6443: read: connection reset by peer
2026-04-20T15:48:14Z node/ip-10-0-70-234.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4k1drimf-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-234.ec2.internal?timeout=10s - read tcp 10.0.70.234:38180->10.0.60.18:6443: read: connection reset by peer
2026-04-20T15:48:21Z node/ip-10-0-106-219.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4k1drimf-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-219.ec2.internal?timeout=10s - read tcp 10.0.106.219:51412->10.0.60.18:6443: read: connection reset by peer
2026-04-20T15:48:45Z node/ip-10-0-2-199.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4k1drimf-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-2-199.ec2.internal?timeout=10s - read tcp 10.0.2.199:51070->10.0.60.18:6443: read: connection reset by peer
2026-04-20T16:39:20Z node/ip-10-0-2-199.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4k1drimf-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-2-199.ec2.internal?timeout=10s - read tcp 10.0.2.199:47542->10.0.60.18:6443: read: connection reset by peer
#2046101993579089920junit11 hours ago
2026-04-20T06:38:34Z node/ip-10-0-114-183.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l3w8k785-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-183.us-west-2.compute.internal?timeout=10s - read tcp 10.0.114.183:54958->10.0.65.225:6443: read: connection reset by peer
2026-04-20T06:38:40Z node/ip-10-0-113-141.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l3w8k785-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-113-141.us-west-2.compute.internal?timeout=10s - read tcp 10.0.113.141:50246->10.0.65.225:6443: read: connection reset by peer
2026-04-20T07:06:52Z node/ip-10-0-114-183.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l3w8k785-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-183.us-west-2.compute.internal?timeout=10s - read tcp 10.0.114.183:46150->10.0.0.230:6443: read: connection reset by peer
2026-04-20T07:06:59Z node/ip-10-0-69-175.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l3w8k785-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-175.us-west-2.compute.internal?timeout=10s - read tcp 10.0.69.175:37638->10.0.0.230:6443: read: connection reset by peer
2026-04-20T07:50:20Z node/ip-10-0-38-174.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l3w8k785-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-38-174.us-west-2.compute.internal?timeout=10s - context deadline exceeded
#2045988293278961664junit19 hours ago
Apr 19 23:30:03.019 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts: Patch "https://api-int.ci-op-ntrlmwj8-bcfb8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/rolebindings/multus-whereabouts?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.84.98:41360->10.0.98.238:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 23:30:03.019 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts: Patch "https://api-int.ci-op-ntrlmwj8-bcfb8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/rolebindings/multus-whereabouts?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.84.98:41360->10.0.98.238:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 23:30:31.024 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045988293278961664junit19 hours ago
2026-04-19T23:21:37Z node/ip-10-0-89-50.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ntrlmwj8-bcfb8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-50.ec2.internal?timeout=10s - read tcp 10.0.89.50:53288->10.0.33.32:6443: read: connection reset by peer
2026-04-19T23:21:40Z node/ip-10-0-89-92.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ntrlmwj8-bcfb8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-92.ec2.internal?timeout=10s - read tcp 10.0.89.92:51986->10.0.33.32:6443: read: connection reset by peer
2026-04-19T23:21:40Z node/ip-10-0-84-98.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ntrlmwj8-bcfb8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-98.ec2.internal?timeout=10s - read tcp 10.0.84.98:52850->10.0.98.238:6443: read: connection reset by peer
2026-04-19T23:25:35Z node/ip-10-0-89-92.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ntrlmwj8-bcfb8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-92.ec2.internal?timeout=10s - read tcp 10.0.89.92:42600->10.0.33.32:6443: read: connection reset by peer
2026-04-19T23:29:37Z node/ip-10-0-89-50.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ntrlmwj8-bcfb8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-50.ec2.internal?timeout=10s - read tcp 10.0.89.50:36236->10.0.33.32:6443: read: connection reset by peer
2026-04-19T23:29:40Z node/ip-10-0-84-98.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ntrlmwj8-bcfb8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-98.ec2.internal?timeout=10s - read tcp 10.0.84.98:51072->10.0.33.32:6443: read: connection reset by peer
2026-04-20T00:22:08Z node/ip-10-0-37-147.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ntrlmwj8-bcfb8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-147.ec2.internal?timeout=10s - read tcp 10.0.37.147:54408->10.0.33.32:6443: read: connection reset by peer
#2045882169141760000junit26 hours ago
2026-04-19T16:27:40Z node/ip-10-0-23-242.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-242.us-east-2.compute.internal?timeout=10s - read tcp 10.0.23.242:55302->10.0.91.92:6443: read: connection reset by peer
2026-04-19T16:27:46Z node/ip-10-0-52-103.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-103.us-east-2.compute.internal?timeout=10s - read tcp 10.0.52.103:33464->10.0.91.92:6443: read: connection reset by peer
2026-04-19T16:31:42Z node/ip-10-0-41-123.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-41-123.us-east-2.compute.internal?timeout=10s - read tcp 10.0.41.123:52306->10.0.91.92:6443: read: connection reset by peer
2026-04-19T16:35:38Z node/ip-10-0-23-242.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-242.us-east-2.compute.internal?timeout=10s - read tcp 10.0.23.242:46970->10.0.41.81:6443: read: connection reset by peer
2026-04-19T17:10:34Z node/ip-10-0-62-108.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-108.us-east-2.compute.internal?timeout=10s - context deadline exceeded
2026-04-19T17:10:35Z node/ip-10-0-41-123.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-41-123.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T17:29:18Z node/ip-10-0-122-47.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-122-47.us-east-2.compute.internal?timeout=10s - read tcp 10.0.122.47:60970->10.0.91.92:6443: read: connection reset by peer
#2045882169141760000junit26 hours ago
2026-04-19T16:09:52Z node/ip-10-0-105-166.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-166.us-east-2.compute.internal?timeout=10s - read tcp 10.0.105.166:57578->10.0.41.81:6443: read: connection reset by peer
2026-04-19T16:27:40Z node/ip-10-0-23-242.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-242.us-east-2.compute.internal?timeout=10s - read tcp 10.0.23.242:55302->10.0.91.92:6443: read: connection reset by peer
2026-04-19T16:27:46Z node/ip-10-0-52-103.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-103.us-east-2.compute.internal?timeout=10s - read tcp 10.0.52.103:33464->10.0.91.92:6443: read: connection reset by peer
2026-04-19T16:31:42Z node/ip-10-0-41-123.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-41-123.us-east-2.compute.internal?timeout=10s - read tcp 10.0.41.123:52306->10.0.91.92:6443: read: connection reset by peer
2026-04-19T16:35:38Z node/ip-10-0-23-242.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-242.us-east-2.compute.internal?timeout=10s - read tcp 10.0.23.242:46970->10.0.41.81:6443: read: connection reset by peer
2026-04-19T17:10:34Z node/ip-10-0-62-108.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ftb3gnqs-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-108.us-east-2.compute.internal?timeout=10s - context deadline exceeded
#2045723528468107264junit36 hours ago
2026-04-19T05:30:01Z node/ip-10-0-51-168.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kwwy12zt-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-168.us-west-2.compute.internal?timeout=10s - read tcp 10.0.51.168:46370->10.0.3.108:6443: read: connection reset by peer
2026-04-19T05:30:04Z node/ip-10-0-69-162.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kwwy12zt-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-162.us-west-2.compute.internal?timeout=10s - read tcp 10.0.69.162:43744->10.0.101.88:6443: read: connection reset by peer
2026-04-19T05:52:19Z node/ip-10-0-51-168.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kwwy12zt-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-168.us-west-2.compute.internal?timeout=10s - read tcp 10.0.51.168:53756->10.0.101.88:6443: read: connection reset by peer
2026-04-19T05:52:26Z node/ip-10-0-108-139.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kwwy12zt-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-108-139.us-west-2.compute.internal?timeout=10s - read tcp 10.0.108.139:51046->10.0.3.108:6443: read: connection reset by peer
2026-04-19T05:52:26Z node/ip-10-0-69-162.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kwwy12zt-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-162.us-west-2.compute.internal?timeout=10s - read tcp 10.0.69.162:37298->10.0.101.88:6443: read: connection reset by peer
2026-04-19T05:56:24Z node/ip-10-0-73-61.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kwwy12zt-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-61.us-west-2.compute.internal?timeout=10s - read tcp 10.0.73.61:48358->10.0.3.108:6443: read: connection reset by peer
2026-04-19T06:35:31Z node/ip-10-0-69-162.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kwwy12zt-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-162.us-west-2.compute.internal?timeout=10s - read tcp 10.0.69.162:52752->10.0.3.108:6443: read: connection reset by peer
2026-04-19T06:36:05Z node/ip-10-0-73-61.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kwwy12zt-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-61.us-west-2.compute.internal?timeout=10s - read tcp 10.0.73.61:55436->10.0.101.88:6443: read: connection reset by peer
2026-04-19T06:39:13Z node/ip-10-0-58-2.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kwwy12zt-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-2.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045604581085286400junit45 hours ago
2026-04-18T21:51:12Z node/ip-10-0-11-49.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jycltsgh-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-49.us-west-2.compute.internal?timeout=10s - read tcp 10.0.11.49:51930->10.0.115.16:6443: read: connection reset by peer
2026-04-18T21:51:17Z node/ip-10-0-75-152.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jycltsgh-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-75-152.us-west-2.compute.internal?timeout=10s - read tcp 10.0.75.152:60882->10.0.115.16:6443: read: connection reset by peer
2026-04-18T21:55:13Z node/ip-10-0-11-59.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jycltsgh-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-59.us-west-2.compute.internal?timeout=10s - read tcp 10.0.11.59:38952->10.0.115.16:6443: read: connection reset by peer
2026-04-18T21:55:24Z node/ip-10-0-11-59.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jycltsgh-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-59.us-west-2.compute.internal?timeout=10s - read tcp 10.0.11.59:38624->10.0.115.16:6443: read: connection reset by peer
2026-04-18T21:59:08Z node/ip-10-0-11-59.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jycltsgh-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-59.us-west-2.compute.internal?timeout=10s - read tcp 10.0.11.59:36688->10.0.32.49:6443: read: connection reset by peer
2026-04-18T22:36:42Z node/ip-10-0-73-7.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jycltsgh-bcfb8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-7.us-west-2.compute.internal?timeout=10s - read tcp 10.0.73.7:39958->10.0.115.16:6443: read: connection reset by peer
periodic-ci-openshift-release-main-okd-scos-4.23-e2e-gcp (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046252950312980480junit3 hours ago
# [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs [Suite:openshift/conformance/parallel] [Suite:k8s]
fail [k8s.io/kubernetes/test/e2e/storage/drivers/csi.go:323]: deploying csi-hostpath driver: create CSIDriver: Post "https://api.ci-op-ybthf43x-4ff3b.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/storage.k8s.io/v1/csidrivers": read tcp 10.128.176.37:59526->34.107.188.192:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.16-e2e-aws-sdn-serial (all) - 2 runs, 0% failed, 100% of runs match
#2046233260224155648junit3 hours ago
2026-04-20T16:39:45Z node/ip-10-0-83-103.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z8k9xcvg-9d319.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-103.ec2.internal?timeout=10s - read tcp 10.0.83.103:56914->10.0.71.206:6443: read: connection reset by peer
2026-04-20T16:44:18Z node/ip-10-0-36-172.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z8k9xcvg-9d319.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-172.ec2.internal?timeout=10s - read tcp 10.0.36.172:36404->10.0.25.156:6443: read: connection reset by peer
2026-04-20T16:52:29Z node/ip-10-0-77-217.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z8k9xcvg-9d319.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-77-217.ec2.internal?timeout=10s - read tcp 10.0.77.217:39540->10.0.71.206:6443: read: connection reset by peer
#2045626233911250944junit44 hours ago
2026-04-19T01:04:30Z node/ip-10-0-89-67.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-n26vqims-9d319.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-67.ec2.internal?timeout=10s - read tcp 10.0.89.67:52446->10.0.74.49:6443: read: connection reset by peer
2026-04-19T01:08:32Z node/ip-10-0-29-48.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-n26vqims-9d319.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-48.ec2.internal?timeout=10s - read tcp 10.0.29.48:33460->10.0.74.49:6443: read: connection reset by peer
2026-04-19T01:12:29Z node/ip-10-0-89-67.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-n26vqims-9d319.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-67.ec2.internal?timeout=10s - read tcp 10.0.89.67:52694->10.0.43.68:6443: read: connection reset by peer
2026-04-19T01:12:29Z node/ip-10-0-127-21.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-n26vqims-9d319.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-21.ec2.internal?timeout=10s - read tcp 10.0.127.21:55778->10.0.74.49:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.15-upgrade-from-stable-4.14-ocp-e2e-aws-sdn-arm64 (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#2046235066077548544junit3 hours ago
2026-04-20T15:33:13Z node/ip-10-0-107-67.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1npx8ldx-8f1da.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-67.us-west-2.compute.internal?timeout=10s - read tcp 10.0.107.67:32836->10.0.92.109:6443: read: connection reset by peer
2026-04-20T15:36:58Z node/ip-10-0-107-67.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1npx8ldx-8f1da.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-67.us-west-2.compute.internal?timeout=10s - read tcp 10.0.107.67:58482->10.0.10.52:6443: read: connection reset by peer
2026-04-20T15:37:07Z node/ip-10-0-69-104.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1npx8ldx-8f1da.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-104.us-west-2.compute.internal?timeout=10s - read tcp 10.0.69.104:58898->10.0.10.52:6443: read: connection reset by peer
2026-04-20T16:23:34Z node/ip-10-0-107-67.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1npx8ldx-8f1da.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-67.us-west-2.compute.internal?timeout=10s - read tcp 10.0.107.67:40562->10.0.10.52:6443: read: connection reset by peer
2026-04-20T16:23:48Z node/ip-10-0-69-104.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1npx8ldx-8f1da.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-104.us-west-2.compute.internal?timeout=10s - read tcp 10.0.69.104:59088->10.0.92.109:6443: read: connection reset by peer
#2045872561652240384junit28 hours ago
2026-04-19T15:50:20Z node/ip-10-0-51-52.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-0818b8gl-8f1da.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-52.ec2.internal?timeout=10s - read tcp 10.0.51.52:34622->10.0.109.27:6443: read: connection reset by peer
2026-04-19T16:26:28Z node/ip-10-0-7-169.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-0818b8gl-8f1da.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-169.ec2.internal?timeout=10s - read tcp 10.0.7.169:36070->10.0.10.240:6443: read: connection reset by peer
2026-04-19T16:26:35Z node/ip-10-0-48-10.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-0818b8gl-8f1da.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-10.ec2.internal?timeout=10s - read tcp 10.0.48.10:43286->10.0.109.27:6443: read: connection reset by peer
#2045872561652240384junit28 hours ago
Apr 19 16:13:14.235 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
Apr 19 16:20:57.015 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /clusternetworks.network.openshift.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /clusternetworks.network.openshift.io: Patch "https://api-int.ci-op-0818b8gl-8f1da.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusternetworks.network.openshift.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.75.59:40216->10.0.109.27:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 19 16:20:57.015 - 42s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /clusternetworks.network.openshift.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /clusternetworks.network.openshift.io: Patch "https://api-int.ci-op-0818b8gl-8f1da.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusternetworks.network.openshift.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.75.59:40216->10.0.109.27:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 19 16:21:39.584 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
periodic-ci-openshift-release-main-ci-4.13-upgrade-from-stable-4.12-e2e-aws-ovn-upgrade (all) - 5 runs, 40% failed, 100% of failures match = 40% impact
#2046233176170303488junit3 hours ago
Apr 20 15:16:46.567 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator kube-apiserver is updating versions\n* Could not update flowschema "openshift-etcd-operator" (79 of 845): the server does not recognize this resource, check extension API servers
Apr 20 15:17:08.167 E clusteroperator/network condition/Degraded status/True reason/ApplyOperatorConfig changed: Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding: Patch "https://api-int.ci-op-kpc4jnp1-706e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/metrics-daemon-sa-rolebinding?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.144.30:58060->10.0.216.193:6443: read: connection reset by peer
Apr 20 15:17:08.167 - 37s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding: Patch "https://api-int.ci-op-kpc4jnp1-706e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/metrics-daemon-sa-rolebinding?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.144.30:58060->10.0.216.193:6443: read: connection reset by peer
Apr 20 15:23:09.155 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-bcd494666-h8j64 node/ip-10-0-194-148.us-west-1.compute.internal uid/18fe63d5-20fd-47e2-b86a-09a629cedd0c container/kube-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#2045695807922900992junit39 hours ago
Apr 19 03:42:38.824 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator kube-apiserver is updating versions\n* Could not update flowschema "openshift-etcd-operator" (79 of 845): the server does not recognize this resource, check extension API servers
Apr 19 03:46:57.769 E clusteroperator/network condition/Degraded status/True reason/ApplyOperatorConfig changed: Error while updating operator configuration: could not apply (/v1, Kind=Service) openshift-network-diagnostics/network-check-source: failed to apply / update (/v1, Kind=Service) openshift-network-diagnostics/network-check-source: Patch "https://api-int.ci-op-cz5dfrrq-706e2.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-diagnostics/services/network-check-source?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.245.153:59952->10.0.223.54:6443: read: connection reset by peer
Apr 19 03:46:57.769 - 37s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (/v1, Kind=Service) openshift-network-diagnostics/network-check-source: failed to apply / update (/v1, Kind=Service) openshift-network-diagnostics/network-check-source: Patch "https://api-int.ci-op-cz5dfrrq-706e2.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-diagnostics/services/network-check-source?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.245.153:59952->10.0.223.54:6443: read: connection reset by peer
Apr 19 03:48:54.875 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-bcd494666-fp4f8 node/ip-10-0-245-153.us-west-2.compute.internal uid/ab5e91f8-66d6-4e35-9685-82956a0548e1 container/kube-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
periodic-ci-openshift-release-main-nightly-4.18-e2e-aws-ovn-serial (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046219349865271296junit3 hours ago
2026-04-20T15:57:03Z node/ip-10-0-85-155.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s0rbm2wr-0acb5.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-85-155.us-west-2.compute.internal?timeout=10s - read tcp 10.0.85.155:56846->10.0.121.249:6443: read: connection reset by peer
2026-04-20T16:01:45Z node/ip-10-0-54-108.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s0rbm2wr-0acb5.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-54-108.us-west-2.compute.internal?timeout=10s - read tcp 10.0.54.108:45918->10.0.58.142:6443: read: connection reset by peer
2026-04-20T16:01:50Z node/ip-10-0-85-155.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s0rbm2wr-0acb5.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-85-155.us-west-2.compute.internal?timeout=10s - read tcp 10.0.85.155:34418->10.0.121.249:6443: read: connection reset by peer
2026-04-20T16:05:51Z node/ip-10-0-69-236.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s0rbm2wr-0acb5.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-236.us-west-2.compute.internal?timeout=10s - read tcp 10.0.69.236:35840->10.0.58.142:6443: read: connection reset by peer
2026-04-20T16:06:23Z node/ip-10-0-127-147.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s0rbm2wr-0acb5.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-147.us-west-2.compute.internal?timeout=10s - read tcp 10.0.127.147:40904->10.0.121.249:6443: read: connection reset by peer
2026-04-20T16:09:44Z node/ip-10-0-54-108.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s0rbm2wr-0acb5.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-54-108.us-west-2.compute.internal?timeout=10s - read tcp 10.0.54.108:55602->10.0.121.249:6443: read: connection reset by peer
2026-04-20T16:09:47Z node/ip-10-0-127-147.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s0rbm2wr-0acb5.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-147.us-west-2.compute.internal?timeout=10s - read tcp 10.0.127.147:51570->10.0.121.249:6443: read: connection reset by peer
2026-04-20T16:23:45Z node/ip-10-0-127-147.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-s0rbm2wr-0acb5.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-147.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-ci-5.0-e2e-vsphere-ovn-upgrade (all) - 7 runs, 0% failed, 100% of runs match
#2046259081219411968junit3 hours ago
2026-04-20T17:16:11Z node/ci-op-4dp8f6lb-fcc7e-qzmmt-worker-0-ns669 - reason/FailedToUpdateLease https://api-int.ci-op-4dp8f6lb-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-4dp8f6lb-fcc7e-qzmmt-worker-0-ns669?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T17:47:14Z node/ci-op-4dp8f6lb-fcc7e-qzmmt-worker-0-ns669 - reason/FailedToUpdateLease https://api-int.ci-op-4dp8f6lb-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-4dp8f6lb-fcc7e-qzmmt-worker-0-ns669?timeout=10s - read tcp 10.93.157.50:39682->10.93.157.8:6443: read: connection reset by peer
2026-04-20T17:47:14Z node/ci-op-4dp8f6lb-fcc7e-qzmmt-worker-0-nbl84 - reason/FailedToUpdateLease https://api-int.ci-op-4dp8f6lb-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-4dp8f6lb-fcc7e-qzmmt-worker-0-nbl84?timeout=10s - read tcp 10.93.157.49:38228->10.93.157.8:6443: read: connection reset by peer
2026-04-20T17:47:22Z node/ci-op-4dp8f6lb-fcc7e-qzmmt-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-4dp8f6lb-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-4dp8f6lb-fcc7e-qzmmt-master-0?timeout=10s - context deadline exceeded
#2046142569397620736junit11 hours ago
2026-04-20T09:45:07Z node/ci-op-qxshnrbt-fcc7e-bkk66-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-qxshnrbt-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qxshnrbt-fcc7e-bkk66-master-1?timeout=10s - read tcp 10.93.157.28:50572->10.93.157.10:6443: read: connection reset by peer
2026-04-20T09:45:08Z node/ci-op-qxshnrbt-fcc7e-bkk66-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-qxshnrbt-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qxshnrbt-fcc7e-bkk66-master-0?timeout=10s - read tcp 10.93.157.27:49804->10.93.157.10:6443: read: connection reset by peer
2026-04-20T09:51:24Z node/ci-op-qxshnrbt-fcc7e-bkk66-worker-0-9lt4x - reason/FailedToUpdateLease https://api-int.ci-op-qxshnrbt-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qxshnrbt-fcc7e-bkk66-worker-0-9lt4x?timeout=10s - read tcp 10.93.157.50:33580->10.93.157.10:6443: read: connection reset by peer
2026-04-20T09:57:42Z node/ci-op-qxshnrbt-fcc7e-bkk66-worker-0-9lt4x - reason/FailedToUpdateLease https://api-int.ci-op-qxshnrbt-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qxshnrbt-fcc7e-bkk66-worker-0-9lt4x?timeout=10s - read tcp 10.93.157.50:45462->10.93.157.10:6443: read: connection reset by peer
2026-04-20T09:57:44Z node/ci-op-qxshnrbt-fcc7e-bkk66-worker-0-wvvgw - reason/FailedToUpdateLease https://api-int.ci-op-qxshnrbt-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qxshnrbt-fcc7e-bkk66-worker-0-wvvgw?timeout=10s - read tcp 10.93.157.52:42232->10.93.157.10:6443: read: connection reset by peer
2026-04-20T09:57:45Z node/ci-op-qxshnrbt-fcc7e-bkk66-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-qxshnrbt-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qxshnrbt-fcc7e-bkk66-master-1?timeout=10s - read tcp 10.93.157.28:55690->10.93.157.10:6443: read: connection reset by peer
2026-04-20T09:57:48Z node/ci-op-qxshnrbt-fcc7e-bkk66-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-qxshnrbt-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qxshnrbt-fcc7e-bkk66-master-2?timeout=10s - write tcp 10.93.157.29:56832->10.93.157.10:6443: write: broken pipe
#2046041590950006784junit18 hours ago
2026-04-20T03:13:58Z node/ci-op-kizznt35-fcc7e-gsl9k-worker-0-tjhwl - reason/FailedToUpdateLease https://api-int.ci-op-kizznt35-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-kizznt35-fcc7e-gsl9k-worker-0-tjhwl?timeout=10s - read tcp 10.93.152.31:33352->10.93.152.8:6443: read: connection reset by peer
2026-04-20T03:13:59Z node/ci-op-kizznt35-fcc7e-gsl9k-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-kizznt35-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-kizznt35-fcc7e-gsl9k-master-1?timeout=10s - read tcp 10.93.152.24:45638->10.93.152.8:6443: read: connection reset by peer
2026-04-20T03:14:02Z node/ci-op-kizznt35-fcc7e-gsl9k-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-kizznt35-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-kizznt35-fcc7e-gsl9k-master-0?timeout=10s - write tcp 10.93.152.25:57956->10.93.152.8:6443: write: connection reset by peer
#2045943315001511936junit24 hours ago
2026-04-19T20:42:03Z node/ci-op-t6m46bny-fcc7e-kq2kg-worker-0-7x5lf - reason/FailedToUpdateLease https://api-int.ci-op-t6m46bny-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-t6m46bny-fcc7e-kq2kg-worker-0-7x5lf?timeout=10s - read tcp 10.95.160.97:59488->10.95.160.12:6443: read: connection reset by peer
#2045857794577403904junit30 hours ago
2026-04-19T14:30:12Z node/ci-op-t3783q8t-fcc7e-qtdxg-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-t3783q8t-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-t3783q8t-fcc7e-qtdxg-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T14:58:07Z node/ci-op-t3783q8t-fcc7e-qtdxg-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-t3783q8t-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-t3783q8t-fcc7e-qtdxg-master-2?timeout=10s - read tcp 10.93.251.98:35392->10.93.251.14:6443: read: connection reset by peer
2026-04-19T14:58:09Z node/ci-op-t3783q8t-fcc7e-qtdxg-worker-0-5rh5x - reason/FailedToUpdateLease https://api-int.ci-op-t3783q8t-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-t3783q8t-fcc7e-qtdxg-worker-0-5rh5x?timeout=10s - read tcp 10.93.251.102:57876->10.93.251.14:6443: read: connection reset by peer
2026-04-19T14:58:11Z node/ci-op-t3783q8t-fcc7e-qtdxg-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-t3783q8t-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-t3783q8t-fcc7e-qtdxg-master-0?timeout=10s - read tcp 10.93.251.96:52506->10.93.251.14:6443: read: connection reset by peer
2026-04-19T14:58:12Z node/ci-op-t3783q8t-fcc7e-qtdxg-worker-0-dmdgp - reason/FailedToUpdateLease https://api-int.ci-op-t3783q8t-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-t3783q8t-fcc7e-qtdxg-worker-0-dmdgp?timeout=10s - read tcp 10.93.251.101:35684->10.93.251.14:6443: read: connection reset by peer
2026-04-19T15:04:08Z node/ci-op-t3783q8t-fcc7e-qtdxg-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-t3783q8t-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-t3783q8t-fcc7e-qtdxg-master-0?timeout=10s - read tcp 10.93.251.96:36694->10.93.251.14:6443: read: connection reset by peer
2026-04-19T15:04:08Z node/ci-op-t3783q8t-fcc7e-qtdxg-worker-0-5rh5x - reason/FailedToUpdateLease https://api-int.ci-op-t3783q8t-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-t3783q8t-fcc7e-qtdxg-worker-0-5rh5x?timeout=10s - read tcp 10.93.251.102:55534->10.93.251.14:6443: read: connection reset by peer
2026-04-19T15:04:09Z node/ci-op-t3783q8t-fcc7e-qtdxg-worker-0-dmdgp - reason/FailedToUpdateLease https://api-int.ci-op-t3783q8t-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-t3783q8t-fcc7e-qtdxg-worker-0-dmdgp?timeout=10s - read tcp 10.93.251.101:55620->10.93.251.14:6443: read: connection reset by peer
2026-04-19T15:04:13Z node/ci-op-t3783q8t-fcc7e-qtdxg-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-t3783q8t-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-t3783q8t-fcc7e-qtdxg-master-1?timeout=10s - read tcp 10.93.251.97:55986->10.93.251.14:6443: read: connection reset by peer
#2045766818215235584junit36 hours ago
2026-04-19T09:06:03Z node/ci-op-07pzfdif-fcc7e-xx2p2-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-07pzfdif-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-07pzfdif-fcc7e-xx2p2-master-1?timeout=10s - read tcp 10.95.160.22:32810->10.95.160.14:6443: read: connection reset by peer
2026-04-19T09:06:06Z node/ci-op-07pzfdif-fcc7e-xx2p2-worker-0-5k8zf - reason/FailedToUpdateLease https://api-int.ci-op-07pzfdif-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-07pzfdif-fcc7e-xx2p2-worker-0-5k8zf?timeout=10s - read tcp 10.95.160.39:54056->10.95.160.14:6443: read: connection reset by peer
2026-04-19T09:06:07Z node/ci-op-07pzfdif-fcc7e-xx2p2-worker-0-4flxn - reason/FailedToUpdateLease https://api-int.ci-op-07pzfdif-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-07pzfdif-fcc7e-xx2p2-worker-0-4flxn?timeout=10s - read tcp 10.95.160.42:42262->10.95.160.14:6443: read: connection reset by peer
#2045668299492036608junit43 hours ago
2026-04-19T02:36:16Z node/ci-op-k27mxkx8-fcc7e-hrgdv-worker-0-w8d96 - reason/FailedToUpdateLease https://api-int.ci-op-k27mxkx8-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-k27mxkx8-fcc7e-hrgdv-worker-0-w8d96?timeout=10s - read tcp 10.221.127.25:51276->10.221.127.4:6443: read: connection reset by peer
2026-04-19T02:36:17Z node/ci-op-k27mxkx8-fcc7e-hrgdv-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-k27mxkx8-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-k27mxkx8-fcc7e-hrgdv-master-0?timeout=10s - read tcp 10.221.127.56:56836->10.221.127.4:6443: read: connection reset by peer
2026-04-19T02:36:23Z node/ci-op-k27mxkx8-fcc7e-hrgdv-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-k27mxkx8-fcc7e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-k27mxkx8-fcc7e-hrgdv-master-1?timeout=10s - read tcp 10.221.127.71:51160->10.221.127.4:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.15-upgrade-from-stable-4.14-e2e-aws-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2046233469670920192junit3 hours ago
2026-04-20T15:21:54Z node/ip-10-0-75-40.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5himyb9k-bab18.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-75-40.us-west-1.compute.internal?timeout=10s - read tcp 10.0.75.40:42922->10.0.38.129:6443: read: connection reset by peer
2026-04-20T16:12:20Z node/ip-10-0-41-59.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5himyb9k-bab18.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-41-59.us-west-1.compute.internal?timeout=10s - read tcp 10.0.41.59:58900->10.0.114.247:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.17-upgrade-from-stable-4.16-e2e-aws-ovn-upgrade (all) - 4 runs, 50% failed, 200% of failures match = 100% impact
#2046231585480511488junit3 hours ago
2026-04-20T15:23:54Z node/ip-10-0-125-106.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wj03shzd-c2294.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-106.ec2.internal?timeout=10s - read tcp 10.0.125.106:54794->10.0.126.20:6443: read: connection reset by peer
2026-04-20T15:23:57Z node/ip-10-0-71-237.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wj03shzd-c2294.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-237.ec2.internal?timeout=10s - read tcp 10.0.71.237:52772->10.0.126.20:6443: read: connection reset by peer
#2046231585480511488junit3 hours ago
E0420 14:48:42.362860       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-wj03shzd-c2294.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0420 14:55:13.797499       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-wj03shzd-c2294.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": context deadline exceeded - error from a previous attempt: read tcp 10.0.47.45:38576->10.0.126.20:6443: read: connection reset by peer
E0420 14:57:01.120230       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io cloud-controller-manager)
#2045778443890593792junit34 hours ago
2026-04-19T09:17:49Z node/ip-10-0-97-226.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-wwxdgmd2-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-97-226.us-west-2.compute.internal?timeout=10s - read tcp 10.0.97.226:41262->10.0.114.138:6443: read: connection reset by peer
2026-04-19T09:17:57Z node/ip-10-0-35-100.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-wwxdgmd2-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-100.us-west-2.compute.internal?timeout=10s - read tcp 10.0.35.100:37142->10.0.26.176:6443: read: connection reset by peer
2026-04-19T09:21:50Z node/ip-10-0-64-178.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-wwxdgmd2-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-178.us-west-2.compute.internal?timeout=10s - read tcp 10.0.64.178:34796->10.0.114.138:6443: read: connection reset by peer
2026-04-19T09:21:53Z node/ip-10-0-82-127.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-wwxdgmd2-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-82-127.us-west-2.compute.internal?timeout=10s - read tcp 10.0.82.127:41648->10.0.114.138:6443: read: connection reset by peer
#2045695858711728128junit39 hours ago
2026-04-19T03:47:11Z node/ip-10-0-69-122.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ptb9pb7z-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-122.ec2.internal?timeout=10s - read tcp 10.0.69.122:45666->10.0.1.19:6443: read: connection reset by peer
2026-04-19T04:44:11Z node/ip-10-0-64-82.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ptb9pb7z-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-82.ec2.internal?timeout=10s - read tcp 10.0.64.82:36874->10.0.120.30:6443: read: connection reset by peer
#2045695858711728128junit39 hours ago
2026-04-19T03:47:11Z node/ip-10-0-69-122.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ptb9pb7z-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-122.ec2.internal?timeout=10s - read tcp 10.0.69.122:45666->10.0.1.19:6443: read: connection reset by peer
2026-04-19T04:44:11Z node/ip-10-0-64-82.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ptb9pb7z-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-82.ec2.internal?timeout=10s - read tcp 10.0.64.82:36874->10.0.120.30:6443: read: connection reset by peer
#2045643781310517248junit43 hours ago
2026-04-19T00:32:10Z node/ip-10-0-30-43.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-szmzwqgt-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-43.us-west-1.compute.internal?timeout=10s - read tcp 10.0.30.43:39508->10.0.42.28:6443: read: connection reset by peer
2026-04-19T01:09:07Z node/ip-10-0-21-215.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-szmzwqgt-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-215.us-west-1.compute.internal?timeout=10s - read tcp 10.0.21.215:50274->10.0.42.28:6443: read: connection reset by peer
2026-04-19T01:09:10Z node/ip-10-0-89-54.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-szmzwqgt-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-54.us-west-1.compute.internal?timeout=10s - read tcp 10.0.89.54:34726->10.0.42.28:6443: read: connection reset by peer
2026-04-19T01:16:40Z node/ip-10-0-89-54.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-szmzwqgt-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-54.us-west-1.compute.internal?timeout=10s - read tcp 10.0.89.54:57696->10.0.113.29:6443: read: connection reset by peer
2026-04-19T01:23:31Z node/ip-10-0-16-220.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-szmzwqgt-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-220.us-west-1.compute.internal?timeout=10s - read tcp 10.0.16.220:38782->10.0.113.29:6443: read: connection reset by peer
#2045643781310517248junit43 hours ago
2026-04-19T00:32:10Z node/ip-10-0-30-43.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-szmzwqgt-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-43.us-west-1.compute.internal?timeout=10s - read tcp 10.0.30.43:39508->10.0.42.28:6443: read: connection reset by peer
2026-04-19T01:09:07Z node/ip-10-0-21-215.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-szmzwqgt-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-215.us-west-1.compute.internal?timeout=10s - read tcp 10.0.21.215:50274->10.0.42.28:6443: read: connection reset by peer
2026-04-19T01:09:10Z node/ip-10-0-89-54.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-szmzwqgt-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-54.us-west-1.compute.internal?timeout=10s - read tcp 10.0.89.54:34726->10.0.42.28:6443: read: connection reset by peer
2026-04-19T01:16:40Z node/ip-10-0-89-54.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-szmzwqgt-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-54.us-west-1.compute.internal?timeout=10s - read tcp 10.0.89.54:57696->10.0.113.29:6443: read: connection reset by peer
2026-04-19T01:23:31Z node/ip-10-0-16-220.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-szmzwqgt-c2294.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-220.us-west-1.compute.internal?timeout=10s - read tcp 10.0.16.220:38782->10.0.113.29:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ovn-two-node-fencing-ipv6 (all) - 4 runs, 25% failed, 100% of failures match = 25% impact
#2046218197081788416junit3 hours ago
2026-04-20T15:12:36Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
2026-04-20T15:12:42Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::14]:47072->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-20T15:12:42Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
#2046218197081788416junit3 hours ago
2026-04-20T15:12:42Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
2026-04-20T15:12:42Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::15]:45202->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-20T15:12:42Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::15]:45210->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-20T15:12:42Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
periodic-ci-openshift-release-main-ci-4.16-upgrade-from-stable-4.15-e2e-aws-ovn-upgrade (all) - 4 runs, 100% failed, 100% of failures match = 100% impact
#2046233260349984768junit3 hours ago
2026-04-20T16:10:55Z node/ip-10-0-127-30.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3pgingjk-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-30.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T16:15:12Z node/ip-10-0-51-56.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3pgingjk-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-56.us-west-1.compute.internal?timeout=10s - read tcp 10.0.51.56:36730->10.0.95.179:6443: read: connection reset by peer
#2046233260349984768junit3 hours ago
Apr 20 16:15:33.991 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group: Patch "https://api-int.ci-op-3pgingjk-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/multus-group?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.67.206:53352->10.0.20.210:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 16:15:33.991 - 24s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group: Patch "https://api-int.ci-op-3pgingjk-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/multus-group?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.67.206:53352->10.0.20.210:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 16:15:58.192 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046233260349984768junit3 hours ago
2026-04-20T16:10:55Z node/ip-10-0-127-30.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3pgingjk-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-30.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T16:15:12Z node/ip-10-0-51-56.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3pgingjk-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-56.us-west-1.compute.internal?timeout=10s - read tcp 10.0.51.56:36730->10.0.95.179:6443: read: connection reset by peer
#2045680749692063744junit40 hours ago
2026-04-19T02:40:30Z node/ip-10-0-19-191.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3w838wfq-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-191.ec2.internal?timeout=10s - read tcp 10.0.19.191:46020->10.0.14.171:6443: read: connection reset by peer
2026-04-19T02:40:50Z node/ip-10-0-31-37.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3w838wfq-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-37.ec2.internal?timeout=10s - read tcp 10.0.31.37:41904->10.0.14.171:6443: read: connection reset by peer
2026-04-19T02:44:26Z node/ip-10-0-19-191.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3w838wfq-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-191.ec2.internal?timeout=10s - read tcp 10.0.19.191:49192->10.0.95.42:6443: read: connection reset by peer
2026-04-19T03:05:28Z node/ip-10-0-35-140.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3w838wfq-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-140.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T03:32:04Z node/ip-10-0-79-9.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3w838wfq-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-9.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T03:38:18Z node/ip-10-0-35-140.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3w838wfq-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-140.ec2.internal?timeout=10s - read tcp 10.0.35.140:53314->10.0.95.42:6443: read: connection reset by peer
#2045680749692063744junit40 hours ago
2026-04-19T02:40:30Z node/ip-10-0-19-191.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3w838wfq-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-191.ec2.internal?timeout=10s - read tcp 10.0.19.191:46020->10.0.14.171:6443: read: connection reset by peer
2026-04-19T02:40:50Z node/ip-10-0-31-37.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3w838wfq-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-37.ec2.internal?timeout=10s - read tcp 10.0.31.37:41904->10.0.14.171:6443: read: connection reset by peer
2026-04-19T02:44:26Z node/ip-10-0-19-191.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3w838wfq-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-191.ec2.internal?timeout=10s - read tcp 10.0.19.191:49192->10.0.95.42:6443: read: connection reset by peer
2026-04-19T03:05:28Z node/ip-10-0-35-140.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3w838wfq-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-140.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045626234003525632junit44 hours ago
2026-04-18T23:18:05Z node/ip-10-0-25-185.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nq19vr7c-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-185.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-18T23:18:52Z node/ip-10-0-100-113.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nq19vr7c-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-113.us-west-2.compute.internal?timeout=10s - read tcp 10.0.100.113:52244->10.0.108.21:6443: read: connection reset by peer
2026-04-18T23:18:59Z node/ip-10-0-25-185.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nq19vr7c-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-185.us-west-2.compute.internal?timeout=10s - read tcp 10.0.25.185:37138->10.0.52.56:6443: read: connection reset by peer
2026-04-18T23:23:09Z node/ip-10-0-16-164.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nq19vr7c-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-164.us-west-2.compute.internal?timeout=10s - read tcp 10.0.16.164:34960->10.0.52.56:6443: read: connection reset by peer
2026-04-18T23:39:36Z node/ip-10-0-25-185.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nq19vr7c-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-185.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045626234003525632junit44 hours ago
2026-04-18T23:40:45Z node/ip-10-0-16-164.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nq19vr7c-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-164.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T00:09:26Z node/ip-10-0-62-218.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nq19vr7c-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-218.us-west-2.compute.internal?timeout=10s - read tcp 10.0.62.218:48048->10.0.52.56:6443: read: connection reset by peer
#2045626234003525632junit44 hours ago
Apr 18 23:40:46.665 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
Apr 19 00:16:44.525 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: failed to apply / update (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: Patch "https://api-int.ci-op-nq19vr7c-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-operator/configmaps/iptables-alerter-script?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.16.164:35244->10.0.52.56:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 19 00:16:44.525 - 24s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: failed to apply / update (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: Patch "https://api-int.ci-op-nq19vr7c-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-operator/configmaps/iptables-alerter-script?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.16.164:35244->10.0.52.56:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 19 00:17:08.833 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045571106143735808junit47 hours ago
2026-04-18T19:25:57Z node/ip-10-0-62-185.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-v36w3hjl-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-185.us-west-1.compute.internal?timeout=10s - read tcp 10.0.62.185:46644->10.0.73.174:6443: read: connection reset by peer
2026-04-18T19:52:30Z node/ip-10-0-20-243.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-v36w3hjl-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-243.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045571106143735808junit47 hours ago
2026-04-18T19:25:57Z node/ip-10-0-62-185.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-v36w3hjl-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-185.us-west-1.compute.internal?timeout=10s - read tcp 10.0.62.185:46644->10.0.73.174:6443: read: connection reset by peer
2026-04-18T19:52:30Z node/ip-10-0-20-243.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-v36w3hjl-be571.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-243.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-multiarch-main-nightly-4.15-upgrade-from-stable-4.14-ocp-e2e-aws-ovn-heterogeneous-upgrade (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#2046219958211317760junit3 hours ago
2026-04-20T15:00:28Z node/ip-10-0-75-132.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-x5cs7ci9-a9464.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-75-132.ec2.internal?timeout=10s - read tcp 10.0.75.132:41544->10.0.122.57:6443: read: connection reset by peer
2026-04-20T15:17:49Z node/ip-10-0-61-45.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-x5cs7ci9-a9464.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-45.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045857479048302592junit29 hours ago
Apr 19 14:39:39.903 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics: Patch "https://api-int.ci-op-i7i68jfk-a9464.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-diagnostics/rolebindings/network-diagnostics?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.20.85:48860->10.0.50.19:6443: read: connection reset by peer
1 tests failed during this blip (2026-04-19 14:39:39.903349964 +0000 UTC m=+415.317994064 to 2026-04-19 14:39:39.903349964 +0000 UTC m=+415.317994064): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 19 14:39:39.903 - 48s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics: Patch "https://api-int.ci-op-i7i68jfk-a9464.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-diagnostics/rolebindings/network-diagnostics?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.20.85:48860->10.0.50.19:6443: read: connection reset by peer
1 tests failed during this blip (2026-04-19 14:39:39.903349964 +0000 UTC m=+415.317994064 to 2026-04-19 14:39:39.903349964 +0000 UTC m=+415.317994064): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
#2045857479048302592junit29 hours ago
2026-04-19T14:39:33Z node/ip-10-0-20-85.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i7i68jfk-a9464.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-85.us-east-2.compute.internal?timeout=10s - read tcp 10.0.20.85:48854->10.0.50.19:6443: read: connection reset by peer
2026-04-19T14:39:44Z node/ip-10-0-29-25.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i7i68jfk-a9464.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-25.us-east-2.compute.internal?timeout=10s - read tcp 10.0.29.25:58338->10.0.50.19:6443: read: connection reset by peer
2026-04-19T15:38:40Z node/ip-10-0-34-27.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i7i68jfk-a9464.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-34-27.us-east-2.compute.internal?timeout=10s - read tcp 10.0.34.27:34552->10.0.83.19:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.13-e2e-aws-ovn-upgrade (all) - 3 runs, 0% failed, 33% of runs match
#2046233175885090816junit4 hours ago
Apr 20 15:31:35.228 E ns/openshift-etcd pod/etcd-ip-10-0-150-181.us-west-1.compute.internal node/ip-10-0-150-181.us-west-1.compute.internal uid/63d0697b-eaf2-4bbe-a1d7-0097bd61dab6 container/etcd-metrics reason/ContainerExit code/2 cause/Error  connection error: desc = \"transport: Error while dialing: dial tcp 10.0.150.181:9978: connect: connection refused\""}\n{"level":"info","ts":"2026-04-20T15:17:10.798375Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc00016ca20, TRANSIENT_FAILURE"}\n{"level":"info","ts":"2026-04-20T15:17:21.426666Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 10.0.150.181:9978: connect: connection refused\""}\n{"level":"info","ts":"2026-04-20T15:17:21.426746Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc00016ca20, IDLE"}\n{"level":"info","ts":"2026-04-20T15:17:21.426779Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"}\n{"level":"info","ts":"2026-04-20T15:17:21.426797Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"10.0.150.181:9978\" to connect"}\n{"level":"info","ts":"2026-04-20T15:17:21.426918Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc00016ca20, CONNECTING"}\n{"level":"info","ts":"2026-04-20T15:17:21.436217Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY"}\n{"level":"info","ts":"2026-04-20T15:17:21.436325Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc00016ca20, READY"}\n{"level":"info","ts":"2026-04-20T15:17:21.436395Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roun
Apr 20 15:33:02.821 E clusteroperator/network condition/Degraded status/True reason/ApplyOperatorConfig changed: Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: Patch "https://api-int.ci-op-6s17titk-923c0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/roles/whereabouts-cni?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.251.144:44178->10.0.254.215:6443: read: connection reset by peer
Apr 20 15:33:02.821 - 37s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: Patch "https://api-int.ci-op-6s17titk-923c0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/roles/whereabouts-cni?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.251.144:44178->10.0.254.215:6443: read: connection reset by peer
Apr 20 15:34:14.906 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-148-58.us-west-1.compute.internal node/ip-10-0-148-58.us-west-1.compute.internal uid/5f71130d-083e-434b-84a6-f8d323191b56 container/kube-apiserver-cert-syncer reason/ContainerExit code/2 cause/Error rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 15:26:53.240767       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0420 15:26:53.241950       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 15:26:54.635420       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0420 15:26:54.635811       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
#2046233175885090816junit4 hours ago
Apr 20 15:33:02.821 - 37s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: Patch "https://api-int.ci-op-6s17titk-923c0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/roles/whereabouts-cni?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.251.144:44178->10.0.254.215:6443: read: connection reset by peer
Apr 20 16:27:42.408 - 771ms E clusteroperator/network condition/Degraded status/True reason/DaemonSet "/openshift-ovn-kubernetes/ovnkube-master" rollout is not making progress - pod ovnkube-master-2tcbr is in CrashLoopBackOff State
periodic-ci-openshift-multiarch-main-nightly-4.21-upgrade-from-stable-4.20-ocp-e2e-aws-ovn-upgrade-multi-x-ax (all) - 6 runs, 0% failed, 100% of runs match
#2046232079582105600junit4 hours ago
Apr 20 15:44:13.163 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (admissionregistration.k8s.io/v1, Kind=ValidatingAdmissionPolicyBinding) /user-defined-networks-namespace-label-binding: failed to apply / update (admissionregistration.k8s.io/v1, Kind=ValidatingAdmissionPolicyBinding) /user-defined-networks-namespace-label-binding: Patch "https://api-int.ci-op-kz4fnw4f-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/admissionregistration.k8s.io/v1/validatingadmissionpolicybindings/user-defined-networks-namespace-label-binding?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.38.185:56764->10.0.66.183:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 15:44:13.163 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (admissionregistration.k8s.io/v1, Kind=ValidatingAdmissionPolicyBinding) /user-defined-networks-namespace-label-binding: failed to apply / update (admissionregistration.k8s.io/v1, Kind=ValidatingAdmissionPolicyBinding) /user-defined-networks-namespace-label-binding: Patch "https://api-int.ci-op-kz4fnw4f-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/admissionregistration.k8s.io/v1/validatingadmissionpolicybindings/user-defined-networks-namespace-label-binding?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.38.185:56764->10.0.66.183:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 15:44:41.171 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046232079582105600junit4 hours ago
2026-04-20T15:40:14Z node/ip-10-0-95-7.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kz4fnw4f-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-95-7.us-east-2.compute.internal?timeout=10s - read tcp 10.0.95.7:41546->10.0.66.183:6443: read: connection reset by peer
2026-04-20T16:29:40Z node/ip-10-0-16-17.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kz4fnw4f-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-17.us-east-2.compute.internal?timeout=10s - read tcp 10.0.16.17:47628->10.0.66.183:6443: read: connection reset by peer
2026-04-20T16:37:17Z node/ip-10-0-74-212.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kz4fnw4f-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-74-212.us-east-2.compute.internal?timeout=10s - read tcp 10.0.74.212:50272->10.0.66.183:6443: read: connection reset by peer
2026-04-20T16:37:19Z node/ip-10-0-16-17.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kz4fnw4f-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-17.us-east-2.compute.internal?timeout=10s - read tcp 10.0.16.17:36906->10.0.18.59:6443: read: connection reset by peer
#2046102001351135232junit13 hours ago
2026-04-20T06:56:40Z node/ip-10-0-25-202.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2nt9rtdp-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-202.ec2.internal?timeout=10s - read tcp 10.0.25.202:34676->10.0.49.166:6443: read: connection reset by peer
2026-04-20T07:00:25Z node/ip-10-0-14-81.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2nt9rtdp-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-81.ec2.internal?timeout=10s - read tcp 10.0.14.81:51814->10.0.103.0:6443: read: connection reset by peer
2026-04-20T07:19:44Z node/ip-10-0-8-192.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2nt9rtdp-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-192.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046102001351135232junit13 hours ago
2026-04-20T07:51:26Z node/ip-10-0-25-202.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2nt9rtdp-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-202.ec2.internal?timeout=10s - context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2026-04-20T07:55:37Z node/ip-10-0-14-81.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2nt9rtdp-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-81.ec2.internal?timeout=10s - read tcp 10.0.14.81:39044->10.0.103.0:6443: read: connection reset by peer
#2045988265533640704junit20 hours ago
2026-04-19T23:06:52Z node/ip-10-0-8-151.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zhli547n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-151.us-west-2.compute.internal?timeout=10s - read tcp 10.0.8.151:37490->10.0.15.60:6443: read: connection reset by peer
2026-04-19T23:06:53Z node/ip-10-0-21-158.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zhli547n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-158.us-west-2.compute.internal?timeout=10s - read tcp 10.0.21.158:36390->10.0.15.60:6443: read: connection reset by peer
2026-04-19T23:33:09Z node/ip-10-0-5-2.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zhli547n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-2.us-west-2.compute.internal?timeout=10s - read tcp 10.0.5.2:41092->10.0.116.154:6443: read: connection reset by peer
2026-04-19T23:37:08Z node/ip-10-0-21-158.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zhli547n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-158.us-west-2.compute.internal?timeout=10s - read tcp 10.0.21.158:52062->10.0.15.60:6443: read: connection reset by peer
2026-04-19T23:41:30Z node/ip-10-0-57-235.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zhli547n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-57-235.us-west-2.compute.internal?timeout=10s - read tcp 10.0.57.235:37110->10.0.15.60:6443: read: connection reset by peer
2026-04-19T23:41:30Z node/ip-10-0-5-2.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zhli547n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-2.us-west-2.compute.internal?timeout=10s - read tcp 10.0.5.2:54504->10.0.15.60:6443: read: connection reset by peer
2026-04-19T23:41:33Z node/ip-10-0-21-158.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zhli547n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-158.us-west-2.compute.internal?timeout=10s - read tcp 10.0.21.158:52358->10.0.15.60:6443: read: connection reset by peer
2026-04-19T23:41:36Z node/ip-10-0-53-192.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zhli547n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-192.us-west-2.compute.internal?timeout=10s - read tcp 10.0.53.192:37054->10.0.116.154:6443: read: connection reset by peer
2026-04-19T23:59:12Z node/ip-10-0-24-177.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zhli547n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-177.us-west-2.compute.internal?timeout=10s - context deadline exceeded
#2045882128943550464junit27 hours ago
2026-04-19T16:09:08Z node/ip-10-0-50-107.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-03xdr07n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-107.us-west-1.compute.internal?timeout=10s - read tcp 10.0.50.107:41554->10.0.59.70:6443: read: connection reset by peer
2026-04-19T16:12:40Z node/ip-10-0-40-241.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-03xdr07n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-241.us-west-1.compute.internal?timeout=10s - read tcp 10.0.40.241:46058->10.0.59.70:6443: read: connection reset by peer
2026-04-19T16:12:53Z node/ip-10-0-50-107.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-03xdr07n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-107.us-west-1.compute.internal?timeout=10s - read tcp 10.0.50.107:36736->10.0.59.70:6443: read: connection reset by peer
2026-04-19T16:39:14Z node/ip-10-0-98-183.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-03xdr07n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-98-183.us-west-1.compute.internal?timeout=10s - read tcp 10.0.98.183:48246->10.0.81.145:6443: read: connection reset by peer
2026-04-19T16:46:54Z node/ip-10-0-50-107.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-03xdr07n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-107.us-west-1.compute.internal?timeout=10s - read tcp 10.0.50.107:37382->10.0.81.145:6443: read: connection reset by peer
2026-04-19T16:47:04Z node/ip-10-0-47-219.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-03xdr07n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-219.us-west-1.compute.internal?timeout=10s - read tcp 10.0.47.219:57032->10.0.59.70:6443: read: connection reset by peer
2026-04-19T17:30:44Z node/ip-10-0-40-241.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-03xdr07n-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-241.us-west-1.compute.internal?timeout=10s - read tcp 10.0.40.241:50012->10.0.81.145:6443: read: connection reset by peer
#2045723553524879360junit38 hours ago
2026-04-19T05:49:12Z node/ip-10-0-46-238.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3y4xpj96-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-238.us-west-2.compute.internal?timeout=10s - read tcp 10.0.46.238:37576->10.0.19.126:6443: read: connection reset by peer
2026-04-19T05:53:47Z node/ip-10-0-120-12.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3y4xpj96-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-12.us-west-2.compute.internal?timeout=10s - read tcp 10.0.120.12:33176->10.0.108.168:6443: read: connection reset by peer
2026-04-19T05:57:24Z node/ip-10-0-52-44.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3y4xpj96-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-44.us-west-2.compute.internal?timeout=10s - read tcp 10.0.52.44:48154->10.0.19.126:6443: read: connection reset by peer
2026-04-19T05:57:25Z node/ip-10-0-105-137.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3y4xpj96-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-137.us-west-2.compute.internal?timeout=10s - read tcp 10.0.105.137:57094->10.0.19.126:6443: read: connection reset by peer
2026-04-19T05:57:41Z node/ip-10-0-20-230.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3y4xpj96-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-230.us-west-2.compute.internal?timeout=10s - read tcp 10.0.20.230:53808->10.0.108.168:6443: read: connection reset by peer
2026-04-19T06:40:46Z node/ip-10-0-105-137.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3y4xpj96-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-137.us-west-2.compute.internal?timeout=10s - read tcp 10.0.105.137:41852->10.0.108.168:6443: read: connection reset by peer
2026-04-19T06:47:17Z node/ip-10-0-52-44.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3y4xpj96-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-44.us-west-2.compute.internal?timeout=10s - read tcp 10.0.52.44:57420->10.0.108.168:6443: read: connection reset by peer
2026-04-19T06:47:36Z node/ip-10-0-20-230.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3y4xpj96-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-230.us-west-2.compute.internal?timeout=10s - read tcp 10.0.20.230:55462->10.0.108.168:6443: read: connection reset by peer
#2045604558457016320junit46 hours ago
2026-04-18T21:30:48Z node/ip-10-0-99-68.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ymb773wg-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-99-68.ec2.internal?timeout=10s - read tcp 10.0.99.68:52598->10.0.115.214:6443: read: connection reset by peer
2026-04-18T21:34:46Z node/ip-10-0-18-244.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ymb773wg-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-18-244.ec2.internal?timeout=10s - read tcp 10.0.18.244:50704->10.0.115.214:6443: read: connection reset by peer
2026-04-18T22:04:45Z node/ip-10-0-1-89.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ymb773wg-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-1-89.ec2.internal?timeout=10s - read tcp 10.0.1.89:45114->10.0.38.115:6443: read: connection reset by peer
2026-04-18T22:08:48Z node/ip-10-0-15-122.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ymb773wg-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-122.ec2.internal?timeout=10s - read tcp 10.0.15.122:37236->10.0.115.214:6443: read: connection reset by peer
2026-04-18T22:34:19Z node/ip-10-0-10-230.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ymb773wg-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-10-230.ec2.internal?timeout=10s - context deadline exceeded
2026-04-18T22:48:06Z node/ip-10-0-99-68.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ymb773wg-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-99-68.ec2.internal?timeout=10s - read tcp 10.0.99.68:35178->10.0.115.214:6443: read: connection reset by peer
2026-04-18T22:55:34Z node/ip-10-0-1-89.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ymb773wg-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-1-89.ec2.internal?timeout=10s - read tcp 10.0.1.89:38792->10.0.115.214:6443: read: connection reset by peer
2026-04-18T22:58:43Z node/ip-10-0-11-210.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ymb773wg-20deb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-210.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-ci-4.14-upgrade-from-stable-4.13-e2e-aws-ovn-upgrade (all) - 3 runs, 0% failed, 67% of runs match
#2046230549705527296junit4 hours ago
Apr 20 15:32:37.629 - 50s   E clusteroperator/network condition/Degraded status/True reason/Internal error while updating operator configuration: could not apply (operator.openshift.io/v1, Kind=Network) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
Apr 20 16:12:24.038 - 47s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (/v1, Kind=Namespace) /openshift-network-diagnostics: failed to apply / update (/v1, Kind=Namespace) /openshift-network-diagnostics: Patch "https://api-int.ci-op-f9rxdncy-b29e1.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-diagnostics?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.155.244:33960->10.0.254.46:6443: read: connection reset by peer
#2045631395979595776junit44 hours ago
2026-04-18T23:21:08Z node/ip-10-0-157-238.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-c3tm6s8c-b29e1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-157-238.ec2.internal?timeout=10s - read tcp 10.0.157.238:48524->10.0.152.6:6443: read: connection reset by peer
2026-04-18T23:21:16Z node/ip-10-0-137-30.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-c3tm6s8c-b29e1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-137-30.ec2.internal?timeout=10s - read tcp 10.0.137.30:33630->10.0.152.6:6443: read: connection reset by peer
pull-ci-openshift-ovn-kubernetes-master-e2e-aws-ovn-hypershift (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#2046241453021073408junit4 hours ago
found 3 apiserver disruption events near E2E test start (Apr 20 15:26:40) with messages:
Apr 20 15:29:05.000 - 1s    E backend-disruption-name/kube-api-http2-internal-lb-reused-connections connection/reused disruption/kube-api-http2-internal-lb load-balancer/internal-lb protocol/http2 target/kube-api reason/DisruptionBegan request-audit-id/no-audit-id backend-disruption-name/kube-api-http2-internal-lb-reused-connections connection/reused disruption/kube-api-http2-internal-lb load-balancer/internal-lb protocol/http2 target/kube-api stopped responding to GET requests over reused connections: category: NeedsTriage err: connection error: Get "https://ad27ed301cb614b4bb8f3d53d578155f-b621696a797b0177.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/default?resourceVersion=2451&sample-id=26": read tcp 10.0.141.208:52474->98.91.104.190:6443: read: connection reset by peer - sample-id=26 audit-id=e0535935-3c39-44a1-8557-cfb8bd3fb29a conn-reused=true status-code= protocol=HTTP/1.1 roundtrip=1ms retry-after=<none> source=
Apr 20 15:29:05.000 - 1s    E backend-disruption-name/openshift-api-http2-internal-lb-reused-connections connection/reused disruption/openshift-api-http2-internal-lb load-balancer/internal-lb protocol/http2 target/openshift-api reason/DisruptionBegan request-audit-id/no-audit-id backend-disruption-name/openshift-api-http2-internal-lb-reused-connections connection/reused disruption/openshift-api-http2-internal-lb load-balancer/internal-lb protocol/http2 target/openshift-api stopped responding to GET requests over reused connections: category: NeedsTriage err: connection error: Get "https://ad27ed301cb614b4bb8f3d53d578155f-b621696a797b0177.elb.us-east-1.amazonaws.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/cli?resourceVersion=9495&sample-id=26": read tcp 10.0.141.208:52470->98.91.104.190:6443: read: connection reset by peer - sample-id=26 audit-id=0880ad07-c066-457e-9964-9d1f2e4422c0 conn-reused=true status-code= protocol=HTTP/1.1 roundtrip=1ms retry-after=<none> source=
Apr 20 15:29:05.000 - 1s    E backend-disruption-name/openshift-api-http2-internal-lb-reused-connections connection/reused disruption/openshift-api-http2-internal-lb load-balancer/internal-lb protocol/http2 target/openshift-api reason/DisruptionBegan request-audit-id/no-audit-id backend-disruption-name/openshift-api-http2-internal-lb-reused-connections connection/reused disruption/openshift-api-http2-internal-lb load-balancer/internal-lb protocol/http2 target/openshift-api stopped responding to GET requests over reused connections: category: NeedsTriage err: connection error: Get "https://ad27ed301cb614b4bb8f3d53d578155f-b621696a797b0177.elb.us-east-1.amazonaws.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/cli?resourceVersion=9495&sample-id=26": read tcp 10.0.142.13:51586->98.91.104.190:6443: read: connection reset by peer - sample-id=26 audit-id=3a9b5803-8d93-4e74-b67c-d4d52e6013a3 conn-reused=true status-code= protocol=HTTP/1.1 roundtrip=7ms retry-after=<none> source=
periodic-ci-openshift-release-main-nightly-4.14-e2e-aws-sdn-upgrade (all) - 6 runs, 67% failed, 125% of failures match = 83% impact
#2046231730565681152junit4 hours ago
2026-04-20T15:20:42Z node/ip-10-0-33-163.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3k8my53x-81ddc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-33-163.us-west-2.compute.internal?timeout=10s - read tcp 10.0.33.163:41674->10.0.92.228:6443: read: connection reset by peer
2026-04-20T15:24:15Z node/ip-10-0-33-163.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3k8my53x-81ddc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-33-163.us-west-2.compute.internal?timeout=10s - read tcp 10.0.33.163:47948->10.0.92.228:6443: read: connection reset by peer
2026-04-20T16:05:11Z node/ip-10-0-16-107.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3k8my53x-81ddc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-107.us-west-2.compute.internal?timeout=10s - read tcp 10.0.16.107:48642->10.0.92.228:6443: read: connection reset by peer
2026-04-20T16:11:06Z node/ip-10-0-89-207.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3k8my53x-81ddc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-207.us-west-2.compute.internal?timeout=10s - read tcp 10.0.89.207:35604->10.0.92.228:6443: read: connection reset by peer
#2046231730565681152junit4 hours ago
Apr 20 15:20:23.992 - 42s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target: Patch "https://api-int.ci-op-3k8my53x-81ddc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/apps/v1/namespaces/openshift-network-diagnostics/daemonsets/network-check-target?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.89.207:45430->10.0.45.109:6443: read: connection reset by peer
#2046187266736394240junit7 hours ago
2026-04-20T12:13:49Z node/ip-10-0-18-91.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xnkzp23n-81ddc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-18-91.us-east-2.compute.internal?timeout=10s - read tcp 10.0.18.91:53494->10.0.34.178:6443: read: connection reset by peer
2026-04-20T12:13:55Z node/ip-10-0-91-172.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xnkzp23n-81ddc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-91-172.us-east-2.compute.internal?timeout=10s - read tcp 10.0.91.172:58978->10.0.34.178:6443: read: connection reset by peer
2026-04-20T12:45:58Z node/ip-10-0-18-91.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xnkzp23n-81ddc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-18-91.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T12:47:53Z node/ip-10-0-16-154.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xnkzp23n-81ddc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-154.us-east-2.compute.internal?timeout=10s - read tcp 10.0.16.154:43926->10.0.34.178:6443: read: connection reset by peer
#2046106433740607488junit12 hours ago
2026-04-20T06:52:43Z node/ip-10-0-88-228.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-viyxqpjp-81ddc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-88-228.us-west-2.compute.internal?timeout=10s - read tcp 10.0.88.228:45852->10.0.42.154:6443: read: connection reset by peer
2026-04-20T06:56:37Z node/ip-10-0-29-93.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-viyxqpjp-81ddc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-93.us-west-2.compute.internal?timeout=10s - read tcp 10.0.29.93:39704->10.0.42.154:6443: read: connection reset by peer
2026-04-20T07:36:16Z node/ip-10-0-60-82.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-viyxqpjp-81ddc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-82.us-west-2.compute.internal?timeout=10s - read tcp 10.0.60.82:50308->10.0.65.236:6443: read: connection reset by peer
#2046106433740607488junit12 hours ago
Apr 20 07:36:34.914 - 41s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config: failed to apply / update (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config: Patch "https://api-int.ci-op-viyxqpjp-81ddc.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-multus/configmaps/multus-daemon-config?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.37.76:60172->10.0.42.154:6443: read: connection reset by peer
#2045954098452238336junit23 hours ago
2026-04-19T20:44:12Z node/ip-10-0-39-52.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qhx05kbw-81ddc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-52.ec2.internal?timeout=10s - read tcp 10.0.39.52:56922->10.0.34.222:6443: read: connection reset by peer
2026-04-19T20:44:12Z node/ip-10-0-114-227.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qhx05kbw-81ddc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-227.ec2.internal?timeout=10s - read tcp 10.0.114.227:34546->10.0.34.222:6443: read: connection reset by peer
2026-04-19T20:44:23Z node/ip-10-0-89-76.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qhx05kbw-81ddc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-76.ec2.internal?timeout=10s - read tcp 10.0.89.76:34166->10.0.100.223:6443: read: connection reset by peer
#2045898806322532352junit26 hours ago
2026-04-19T17:03:07Z node/ip-10-0-89-191.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y39zdlzc-81ddc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-191.us-west-1.compute.internal?timeout=10s - read tcp 10.0.89.191:59086->10.0.83.15:6443: read: connection reset by peer
#2045898806322532352junit26 hours ago
Apr 19 17:07:14.422 - 42s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2: Patch "https://api-int.ci-op-y39zdlzc-81ddc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-node-identity/rolebindings/system:openshift:scc:hostnetwork-v2?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.113.128:45422->10.0.83.15:6443: read: connection reset by peer
1 tests failed during this blip (2026-04-19 17:07:14.422758401 +0000 UTC to 2026-04-19 17:07:14.422758401 +0000 UTC): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial]
#2045898806322532352junit26 hours ago
2026-04-19T16:59:14Z node/ip-10-0-89-191.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y39zdlzc-81ddc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-191.us-west-1.compute.internal?timeout=10s - read tcp 10.0.89.191:49868->10.0.83.15:6443: read: connection reset by peer
2026-04-19T17:03:07Z node/ip-10-0-89-191.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y39zdlzc-81ddc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-191.us-west-1.compute.internal?timeout=10s - read tcp 10.0.89.191:59086->10.0.83.15:6443: read: connection reset by peer
periodic-ci-openshift-hypershift-release-4.19-periodics-e2e-aws-ovn-conformance (all) - 12 runs, 58% failed, 29% of failures match = 17% impact
#2046256591132430336junit4 hours ago
    <*url.Error | 0xc001dacd80>:
    Delete "https://ae9836b98bbe24808baffe9169cb2176-2d57f00461250e34.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-network-segmentation-e2e-9437-blue": read tcp 10.129.38.135:60610->34.225.129.98:6443: read: connection reset by peer
    {
#2046233394286694400junit6 hours ago
    <*url.Error | 0xc002a19500>:
    Delete "https://a1d09b3445a634c6bb605d559482cd09-856fda63e6e87545.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-network-segmentation-e2e-9221-default": read tcp 10.129.102.158:49004->34.224.4.20:6443: read: connection reset by peer
    {
pull-ci-openshift-cluster-network-operator-release-4.18-e2e-aws-ovn-serial (all) - 1 runs, 0% failed, 100% of runs match
#2046216653011685376junit4 hours ago
2026-04-20T14:20:43Z node/ip-10-0-17-100.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yjnkb09q-be79a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-100.us-west-1.compute.internal?timeout=10s - read tcp 10.0.17.100:33334->10.0.30.238:6443: read: connection reset by peer
2026-04-20T15:53:23Z node/ip-10-0-60-94.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yjnkb09q-be79a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-94.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T15:56:48Z node/ip-10-0-60-94.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yjnkb09q-be79a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-94.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T16:22:43Z node/ip-10-0-52-255.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yjnkb09q-be79a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-255.us-west-1.compute.internal?timeout=10s - read tcp 10.0.52.255:43410->10.0.30.238:6443: read: connection reset by peer
2026-04-20T16:30:37Z node/ip-10-0-17-100.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yjnkb09q-be79a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-100.us-west-1.compute.internal?timeout=10s - read tcp 10.0.17.100:50738->10.0.30.238:6443: read: connection reset by peer
2026-04-20T16:35:28Z node/ip-10-0-52-255.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yjnkb09q-be79a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-255.us-west-1.compute.internal?timeout=10s - read tcp 10.0.52.255:56068->10.0.30.238:6443: read: connection reset by peer
2026-04-20T16:39:27Z node/ip-10-0-17-100.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yjnkb09q-be79a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-100.us-west-1.compute.internal?timeout=10s - read tcp 10.0.17.100:35042->10.0.30.238:6443: read: connection reset by peer
#2046216653011685376junit4 hours ago
Apr 20 16:35:33.027 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s: Patch "https://api-int.ci-op-yjnkb09q-be79a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-ovn-kubernetes/rolebindings/prometheus-k8s?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.34.138:60316->10.0.30.238:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 20 16:35:33.027 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s: Patch "https://api-int.ci-op-yjnkb09q-be79a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-ovn-kubernetes/rolebindings/prometheus-k8s?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.34.138:60316->10.0.30.238:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 20 16:36:01.234 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
pull-ci-openshift-cluster-monitoring-operator-main-e2e-hypershift-conformance (all) - 14 runs, 29% failed, 50% of failures match = 14% impact
#2046237272579248128junit4 hours ago
# [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
fail [k8s.io/kubernetes/test/e2e/framework/framework.go:396]: Couldn't delete ns: "e2e-watch-1105": Delete "https://afc26dc0c0ddd4decb7a639144e7f7c0-f9a432b5c41fdfcb.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-watch-1105": read tcp 172.24.132.212:52152->3.211.226.52:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://afc26dc0c0ddd4decb7a639144e7f7c0-f9a432b5c41fdfcb.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-watch-1105", Err:(*net.OpError)(0xc000d3b9f0)})
#2046028555736846336junit18 hours ago
# [sig-network][Feature:EgressFirewall] egressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]
fail [k8s.io/kubernetes@v1.35.1/test/e2e/framework/framework.go:396]: Couldn't delete ns: "e2e-test-no-egress-firewall-e2e-bhhg5": Delete "https://a17ec04c2fb834f65b72c365c04364e1-8ac0782131e2c876.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-test-no-egress-firewall-e2e-bhhg5": read tcp 172.24.11.216:40170->13.216.154.100:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://a17ec04c2fb834f65b72c365c04364e1-8ac0782131e2c876.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-test-no-egress-firewall-e2e-bhhg5", Err:(*net.OpError)(0xc000cf1c20)})
pull-ci-openshift-cluster-network-operator-release-4.18-4.18-upgrade-from-stable-4.17-e2e-aws-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2046216652764221440junit4 hours ago
2026-04-20T14:48:35Z node/ip-10-0-28-42.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i30yhxi7-948b4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-28-42.us-west-1.compute.internal?timeout=10s - read tcp 10.0.28.42:36002->10.0.50.17:6443: read: connection reset by peer
2026-04-20T14:48:38Z node/ip-10-0-58-8.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i30yhxi7-948b4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-8.us-west-1.compute.internal?timeout=10s - read tcp 10.0.58.8:44296->10.0.50.17:6443: read: connection reset by peer
2026-04-20T14:48:38Z node/ip-10-0-22-7.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i30yhxi7-948b4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-7.us-west-1.compute.internal?timeout=10s - read tcp 10.0.22.7:45636->10.0.50.17:6443: read: connection reset by peer
2026-04-20T14:52:44Z node/ip-10-0-37-51.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i30yhxi7-948b4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-51.us-west-1.compute.internal?timeout=10s - read tcp 10.0.37.51:47100->10.0.50.17:6443: read: connection reset by peer
2026-04-20T15:48:38Z node/ip-10-0-56-21.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i30yhxi7-948b4.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-56-21.us-west-1.compute.internal?timeout=10s - read tcp 10.0.56.21:46652->10.0.50.17:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.17-upgrade-from-stable-4.16-ocp-e2e-aws-ovn-upgrade-multi-x-ax (all) - 2 runs, 0% failed, 100% of runs match
#2046235078631100416junit4 hours ago
2026-04-20T15:19:30Z node/ip-10-0-120-48.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-58tvv14c-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-48.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T15:49:57Z node/ip-10-0-107-246.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-58tvv14c-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-246.ec2.internal?timeout=10s - read tcp 10.0.107.246:60024->10.0.122.194:6443: read: connection reset by peer
2026-04-20T15:53:43Z node/ip-10-0-32-248.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-58tvv14c-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-248.ec2.internal?timeout=10s - read tcp 10.0.32.248:45346->10.0.38.116:6443: read: connection reset by peer
2026-04-20T15:53:46Z node/ip-10-0-48-5.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-58tvv14c-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-5.ec2.internal?timeout=10s - read tcp 10.0.48.5:42824->10.0.38.116:6443: read: connection reset by peer
2026-04-20T16:08:17Z node/ip-10-0-120-48.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-58tvv14c-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-48.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046235078631100416junit4 hours ago
2026-04-20T16:08:40Z node/ip-10-0-111-9.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-58tvv14c-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-9.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T16:45:19Z node/ip-10-0-107-246.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-58tvv14c-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-246.ec2.internal?timeout=10s - read tcp 10.0.107.246:34804->10.0.38.116:6443: read: connection reset by peer
#2046235078631100416junit4 hours ago
Apr 20 15:50:14.776 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=Service) openshift-multus/multus-admission-controller: failed to apply / update (/v1, Kind=Service) openshift-multus/multus-admission-controller: Patch "https://api-int.ci-op-58tvv14c-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-multus/services/multus-admission-controller?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.98.80:49486->10.0.38.116:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 15:50:14.776 - 24s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=Service) openshift-multus/multus-admission-controller: failed to apply / update (/v1, Kind=Service) openshift-multus/multus-admission-controller: Patch "https://api-int.ci-op-58tvv14c-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-multus/services/multus-admission-controller?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.98.80:49486->10.0.38.116:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 15:50:39.184 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045872561752903680junit28 hours ago
2026-04-19T16:09:13Z node/ip-10-0-67-82.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ivwthdg-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-82.us-west-1.compute.internal?timeout=10s - read tcp 10.0.67.82:52178->10.0.105.3:6443: read: connection reset by peer
2026-04-19T16:13:08Z node/ip-10-0-67-82.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-7ivwthdg-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-82.us-west-1.compute.internal?timeout=10s - read tcp 10.0.67.82:59068->10.0.62.123:6443: read: connection reset by peer
#2045872561752903680junit28 hours ago
Apr 19 16:57:48.350 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=Namespace) /openshift-host-network: failed to apply / update (/v1, Kind=Namespace) /openshift-host-network: Patch "https://api-int.ci-op-7ivwthdg-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-host-network?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.4.37:53528->10.0.62.123:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 19 16:57:48.350 - 25s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=Namespace) /openshift-host-network: failed to apply / update (/v1, Kind=Namespace) /openshift-host-network: Patch "https://api-int.ci-op-7ivwthdg-51901.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-host-network?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.4.37:53528->10.0.62.123:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 19 16:58:14.208 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
periodic-ci-openshift-release-main-ci-4.14-e2e-aws-sdn-serial (all) - 4 runs, 50% failed, 200% of failures match = 100% impact
#2046230549818773504junit4 hours ago
2026-04-20T16:31:52Z node/ip-10-0-120-31.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nhp0wy01-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-31.ec2.internal?timeout=10s - read tcp 10.0.120.31:47196->10.0.97.71:6443: read: connection reset by peer
#2045717538028916736junit38 hours ago
2026-04-19T06:15:54Z node/ip-10-0-59-16.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-g3wh59c4-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-16.us-west-1.compute.internal?timeout=10s - read tcp 10.0.59.16:49226->10.0.83.127:6443: read: connection reset by peer
2026-04-19T06:15:59Z node/ip-10-0-31-59.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-g3wh59c4-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-59.us-west-1.compute.internal?timeout=10s - read tcp 10.0.31.59:52604->10.0.10.183:6443: read: connection reset by peer
2026-04-19T06:19:37Z node/ip-10-0-30-241.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-g3wh59c4-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-241.us-west-1.compute.internal?timeout=10s - read tcp 10.0.30.241:53134->10.0.10.183:6443: read: connection reset by peer
2026-04-19T06:23:53Z node/ip-10-0-30-241.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-g3wh59c4-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-241.us-west-1.compute.internal?timeout=10s - read tcp 10.0.30.241:41490->10.0.83.127:6443: read: connection reset by peer
#2045717538028916736junit38 hours ago
Apr 19 06:07:45.969 - 42s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-cloud-network-config-controller/kube-cloud-config: failed to apply / update (/v1, Kind=ConfigMap) openshift-cloud-network-config-controller/kube-cloud-config: Patch "https://api-int.ci-op-g3wh59c4-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-cloud-network-config-controller/configmaps/kube-cloud-config?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.117.22:51506->10.0.83.127:6443: read: connection reset by peer
Apr 19 06:20:06.275 - 42s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressnetworkpolicies.network.openshift.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressnetworkpolicies.network.openshift.io: Patch "https://api-int.ci-op-g3wh59c4-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/egressnetworkpolicies.network.openshift.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.117.22:41106->10.0.10.183:6443: read: connection reset by peer
Apr 19 06:23:50.202 - 40s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics: Patch "https://api-int.ci-op-g3wh59c4-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/network-diagnostics?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.117.22:47644->10.0.10.183:6443: read: connection reset by peer
#2045673036006297600junit41 hours ago
2026-04-19T03:08:49Z node/ip-10-0-56-241.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rib5s8sj-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-56-241.us-west-2.compute.internal?timeout=10s - read tcp 10.0.56.241:34370->10.0.65.84:6443: read: connection reset by peer
2026-04-19T03:21:17Z node/ip-10-0-40-106.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rib5s8sj-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-106.us-west-2.compute.internal?timeout=10s - read tcp 10.0.40.106:37074->10.0.32.112:6443: read: connection reset by peer
2026-04-19T03:25:21Z node/ip-10-0-25-125.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rib5s8sj-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-125.us-west-2.compute.internal?timeout=10s - read tcp 10.0.25.125:37272->10.0.65.84:6443: read: connection reset by peer
2026-04-19T03:25:28Z node/ip-10-0-103-51.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rib5s8sj-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-51.us-west-2.compute.internal?timeout=10s - read tcp 10.0.103.51:46580->10.0.65.84:6443: read: connection reset by peer
#2045673036006297600junit41 hours ago
Apr 19 03:12:48.441 - 42s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity: failed to apply / update (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity: Patch "https://api-int.ci-op-rib5s8sj-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.56.241:42208->10.0.65.84:6443: read: connection reset by peer
Apr 19 03:17:13.570 - 42s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts: Patch "https://api-int.ci-op-rib5s8sj-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/rolebindings/multus-whereabouts?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.56.241:44588->10.0.32.112:6443: read: connection reset by peer
Apr 19 03:26:00.272 - 42s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /cloudprivateipconfigs.cloud.network.openshift.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /cloudprivateipconfigs.cloud.network.openshift.io: Patch "https://api-int.ci-op-rib5s8sj-51cf2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/cloudprivateipconfigs.cloud.network.openshift.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.56.241:44348->10.0.65.84:6443: read: connection reset by peer
#2045631395820212224junit44 hours ago
2026-04-19T00:14:24Z node/ip-10-0-110-96.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4dcmzghx-51cf2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-110-96.ec2.internal?timeout=10s - read tcp 10.0.110.96:58358->10.0.78.149:6443: read: connection reset by peer
2026-04-19T00:21:56Z node/ip-10-0-121-229.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4dcmzghx-51cf2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-229.ec2.internal?timeout=10s - read tcp 10.0.121.229:60462->10.0.33.157:6443: read: connection reset by peer
2026-04-19T00:34:52Z node/ip-10-0-123-217.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4dcmzghx-51cf2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-217.ec2.internal?timeout=10s - read tcp 10.0.123.217:59266->10.0.33.157:6443: read: connection reset by peer
2026-04-19T00:35:01Z node/ip-10-0-121-229.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4dcmzghx-51cf2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-229.ec2.internal?timeout=10s - read tcp 10.0.121.229:36156->10.0.78.149:6443: read: connection reset by peer
#2045631395820212224junit44 hours ago
Apr 19 00:31:16.321 - 42s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s: Patch "https://api-int.ci-op-4dcmzghx-51cf2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-diagnostics/rolebindings/prometheus-k8s?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.41.155:56144->10.0.78.149:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.15-e2e-aws-sdn-serial (all) - 1 runs, 0% failed, 100% of runs match
#2046233465480810496junit4 hours ago
2026-04-20T16:14:25Z node/ip-10-0-104-183.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i8knvy56-62fab.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-183.us-east-2.compute.internal?timeout=10s - read tcp 10.0.104.183:57380->10.0.95.197:6443: read: connection reset by peer
2026-04-20T16:27:02Z node/ip-10-0-104-183.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i8knvy56-62fab.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-183.us-east-2.compute.internal?timeout=10s - read tcp 10.0.104.183:36000->10.0.95.197:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.22-e2e-agent-ha-dualstack-conformance (all) - 4 runs, 100% failed, 75% of failures match = 75% impact
#2046197545562017792junit4 hours ago
2026-04-20T14:59:19Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.82:42166->192.168.111.5:6443: read: connection reset by peer
2026-04-20T14:59:19Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.84:58440->192.168.111.5:6443: read: connection reset by peer
2026-04-20T14:59:19Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - http2: client connection force closed via ClientConn.Close
#2046016335208517632junit15 hours ago
2026-04-20T04:00:08Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T04:00:10Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::51]:50538->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-20T04:00:10Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - dial tcp 192.168.111.5:6443: connect: connection refused
#2045654134744420352junit40 hours ago
2026-04-19T03:16:18Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - http2: client connection lost (Client.Timeout exceeded while awaiting headers)
2026-04-19T03:16:18Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.5:48344->192.168.111.5:6443: read: connection reset by peer
2026-04-19T03:16:18Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - dial tcp 192.168.111.5:6443: connect: connection refused
periodic-ci-openshift-release-main-okd-scos-4.21-upgrade-from-okd-scos-4.20-e2e-aws-ovn-upgrade (all) - 5 runs, 0% failed, 100% of runs match
#2046211184494907392junit4 hours ago
2026-04-20T14:06:16Z node/ip-10-0-123-144.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdh8jhv3-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-144.ec2.internal?timeout=10s - read tcp 10.0.123.144:59032->10.0.90.231:6443: read: connection reset by peer
2026-04-20T14:10:07Z node/ip-10-0-57-173.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdh8jhv3-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-57-173.ec2.internal?timeout=10s - read tcp 10.0.57.173:37726->10.0.90.231:6443: read: connection reset by peer
2026-04-20T14:10:10Z node/ip-10-0-116-99.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdh8jhv3-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-99.ec2.internal?timeout=10s - read tcp 10.0.116.99:54548->10.0.90.231:6443: read: connection reset by peer
2026-04-20T14:14:05Z node/ip-10-0-123-144.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdh8jhv3-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-144.ec2.internal?timeout=10s - read tcp 10.0.123.144:41614->10.0.53.181:6443: read: connection reset by peer
2026-04-20T14:14:13Z node/ip-10-0-68-71.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdh8jhv3-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-68-71.ec2.internal?timeout=10s - read tcp 10.0.68.71:52556->10.0.53.181:6443: read: connection reset by peer
2026-04-20T15:02:07Z node/ip-10-0-116-99.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdh8jhv3-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-99.ec2.internal?timeout=10s - read tcp 10.0.116.99:51742->10.0.90.231:6443: read: connection reset by peer
2026-04-20T15:09:52Z node/ip-10-0-68-71.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zdh8jhv3-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-68-71.ec2.internal?timeout=10s - read tcp 10.0.68.71:49462->10.0.90.231:6443: read: connection reset by peer
#2046151615508910080junit8 hours ago
2026-04-20T10:33:27Z node/ip-10-0-42-175.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5gq46p8m-b40ef.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-42-175.us-east-2.compute.internal?timeout=10s - read tcp 10.0.42.175:46074->10.0.38.246:6443: read: connection reset by peer
2026-04-20T10:37:23Z node/ip-10-0-9-198.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5gq46p8m-b40ef.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-198.us-east-2.compute.internal?timeout=10s - read tcp 10.0.9.198:37734->10.0.88.171:6443: read: connection reset by peer
2026-04-20T10:37:27Z node/ip-10-0-41-101.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5gq46p8m-b40ef.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-41-101.us-east-2.compute.internal?timeout=10s - read tcp 10.0.41.101:41236->10.0.88.171:6443: read: connection reset by peer
2026-04-20T10:56:14Z node/ip-10-0-101-99.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5gq46p8m-b40ef.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-99.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046029013113114624junit16 hours ago
2026-04-20T01:57:10Z node/ip-10-0-78-121.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-2hk103n7-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-78-121.us-west-1.compute.internal?timeout=10s - read tcp 10.0.78.121:56042->10.0.2.106:6443: read: connection reset by peer
2026-04-20T03:26:36Z node/ip-10-0-113-132.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-2hk103n7-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-113-132.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045847243520479232junit28 hours ago
2026-04-19T13:54:08Z node/ip-10-0-112-211.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4nvtqc0-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-211.us-west-2.compute.internal?timeout=10s - read tcp 10.0.112.211:52286->10.0.32.148:6443: read: connection reset by peer
2026-04-19T13:58:06Z node/ip-10-0-34-205.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4nvtqc0-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-34-205.us-west-2.compute.internal?timeout=10s - read tcp 10.0.34.205:35390->10.0.112.42:6443: read: connection reset by peer
2026-04-19T13:58:07Z node/ip-10-0-32-173.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4nvtqc0-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-173.us-west-2.compute.internal?timeout=10s - read tcp 10.0.32.173:51382->10.0.32.148:6443: read: connection reset by peer
2026-04-19T13:58:18Z node/ip-10-0-120-219.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4nvtqc0-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-219.us-west-2.compute.internal?timeout=10s - read tcp 10.0.120.219:59582->10.0.112.42:6443: read: connection reset by peer
2026-04-19T13:58:25Z node/ip-10-0-35-193.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4nvtqc0-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-193.us-west-2.compute.internal?timeout=10s - read tcp 10.0.35.193:41272->10.0.112.42:6443: read: connection reset by peer
2026-04-19T14:02:11Z node/ip-10-0-17-86.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4nvtqc0-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-86.us-west-2.compute.internal?timeout=10s - read tcp 10.0.17.86:50784->10.0.32.148:6443: read: connection reset by peer
2026-04-19T14:57:49Z node/ip-10-0-17-86.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4nvtqc0-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-86.us-west-2.compute.internal?timeout=10s - read tcp 10.0.17.86:36376->10.0.112.42:6443: read: connection reset by peer
#2045737393868247040junit36 hours ago
2026-04-19T06:52:55Z node/ip-10-0-98-144.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mpfkkt8i-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-98-144.us-west-2.compute.internal?timeout=10s - read tcp 10.0.98.144:37594->10.0.109.153:6443: read: connection reset by peer
2026-04-19T06:53:01Z node/ip-10-0-32-90.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mpfkkt8i-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-90.us-west-2.compute.internal?timeout=10s - read tcp 10.0.32.90:49320->10.0.109.153:6443: read: connection reset by peer
2026-04-19T06:53:05Z node/ip-10-0-98-144.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mpfkkt8i-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-98-144.us-west-2.compute.internal?timeout=10s - read tcp 10.0.98.144:37602->10.0.109.153:6443: read: connection reset by peer
2026-04-19T07:18:51Z node/ip-10-0-77-104.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mpfkkt8i-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-77-104.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045737393868247040junit36 hours ago
2026-04-19T07:19:05Z node/ip-10-0-119-200.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mpfkkt8i-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-200.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T07:48:26Z node/ip-10-0-125-189.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mpfkkt8i-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-189.us-west-2.compute.internal?timeout=10s - read tcp 10.0.125.189:43824->10.0.45.199:6443: read: connection reset by peer
2026-04-19T07:56:36Z node/ip-10-0-125-189.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mpfkkt8i-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-189.us-west-2.compute.internal?timeout=10s - read tcp 10.0.125.189:44262->10.0.45.199:6443: read: connection reset by peer
2026-04-19T07:56:54Z node/ip-10-0-32-90.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-mpfkkt8i-b40ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-90.us-west-2.compute.internal?timeout=10s - read tcp 10.0.32.90:32840->10.0.109.153:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.20-upgrade-from-stable-4.19-e2e-aws-ovn-upgrade (all) - 16 runs, 31% failed, 280% of failures match = 88% impact
#2046208935467159552junit4 hours ago
2026-04-20T13:39:36Z node/ip-10-0-51-200.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pcb9hi5h-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-200.us-west-2.compute.internal?timeout=10s - read tcp 10.0.51.200:59384->10.0.19.175:6443: read: connection reset by peer
2026-04-20T14:01:42Z node/ip-10-0-51-200.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pcb9hi5h-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-200.us-west-2.compute.internal?timeout=10s - read tcp 10.0.51.200:33742->10.0.123.220:6443: read: connection reset by peer
2026-04-20T14:06:03Z node/ip-10-0-114-133.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pcb9hi5h-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-133.us-west-2.compute.internal?timeout=10s - read tcp 10.0.114.133:55972->10.0.123.220:6443: read: connection reset by peer
2026-04-20T14:06:09Z node/ip-10-0-25-37.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pcb9hi5h-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-37.us-west-2.compute.internal?timeout=10s - read tcp 10.0.25.37:33744->10.0.19.175:6443: read: connection reset by peer
2026-04-20T14:51:29Z node/ip-10-0-39-228.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pcb9hi5h-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-228.us-west-2.compute.internal?timeout=10s - read tcp 10.0.39.228:52272->10.0.19.175:6443: read: connection reset by peer
#2046139044164800512junit9 hours ago
2026-04-20T08:57:22Z node/ip-10-0-22-147.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i2t4idnx-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-147.us-west-2.compute.internal?timeout=10s - read tcp 10.0.22.147:54704->10.0.24.112:6443: read: connection reset by peer
2026-04-20T09:01:16Z node/ip-10-0-61-104.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i2t4idnx-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-104.us-west-2.compute.internal?timeout=10s - read tcp 10.0.61.104:33444->10.0.24.112:6443: read: connection reset by peer
2026-04-20T09:05:05Z node/ip-10-0-87-114.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i2t4idnx-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-87-114.us-west-2.compute.internal?timeout=10s - read tcp 10.0.87.114:57144->10.0.24.112:6443: read: connection reset by peer
2026-04-20T09:05:06Z node/ip-10-0-122-9.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i2t4idnx-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-122-9.us-west-2.compute.internal?timeout=10s - read tcp 10.0.122.9:49446->10.0.127.72:6443: read: connection reset by peer
2026-04-20T09:05:40Z node/ip-10-0-26-131.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i2t4idnx-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-131.us-west-2.compute.internal?timeout=10s - read tcp 10.0.26.131:52568->10.0.127.72:6443: read: connection reset by peer
2026-04-20T09:32:44Z node/ip-10-0-61-104.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i2t4idnx-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-104.us-west-2.compute.internal?timeout=10s - read tcp 10.0.61.104:49956->10.0.24.112:6443: read: connection reset by peer
2026-04-20T09:52:06Z node/ip-10-0-122-9.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i2t4idnx-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-122-9.us-west-2.compute.internal?timeout=10s - context deadline exceeded
#2046048406404599808junit15 hours ago
2026-04-20T03:18:40Z node/ip-10-0-6-230.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fchdl6h8-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-230.us-east-2.compute.internal?timeout=10s - read tcp 10.0.6.230:58332->10.0.2.244:6443: read: connection reset by peer
2026-04-20T03:26:46Z node/ip-10-0-39-120.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fchdl6h8-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-120.us-east-2.compute.internal?timeout=10s - read tcp 10.0.39.120:44966->10.0.120.54:6443: read: connection reset by peer
2026-04-20T03:27:00Z node/ip-10-0-6-230.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fchdl6h8-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-230.us-east-2.compute.internal?timeout=10s - read tcp 10.0.6.230:34742->10.0.2.244:6443: read: connection reset by peer
2026-04-20T04:10:58Z node/ip-10-0-39-120.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fchdl6h8-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-120.us-east-2.compute.internal?timeout=10s - read tcp 10.0.39.120:43402->10.0.2.244:6443: read: connection reset by peer
2026-04-20T04:11:09Z node/ip-10-0-39-120.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fchdl6h8-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-120.us-east-2.compute.internal?timeout=10s - read tcp 10.0.39.120:44020->10.0.2.244:6443: read: connection reset by peer
2026-04-20T04:17:53Z node/ip-10-0-120-27.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fchdl6h8-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-27.us-east-2.compute.internal?timeout=10s - read tcp 10.0.120.27:39790->10.0.2.244:6443: read: connection reset by peer
#2046033483872079872junit16 hours ago
2026-04-20T02:15:33Z node/ip-10-0-37-134.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l47vkt8f-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-134.us-west-2.compute.internal?timeout=10s - read tcp 10.0.37.134:51414->10.0.110.217:6443: read: connection reset by peer
2026-04-20T02:19:44Z node/ip-10-0-64-46.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l47vkt8f-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-46.us-west-2.compute.internal?timeout=10s - read tcp 10.0.64.46:41566->10.0.44.50:6443: read: connection reset by peer
2026-04-20T02:19:44Z node/ip-10-0-68-105.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l47vkt8f-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-68-105.us-west-2.compute.internal?timeout=10s - read tcp 10.0.68.105:46482->10.0.110.217:6443: read: connection reset by peer
2026-04-20T03:17:11Z node/ip-10-0-50-144.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l47vkt8f-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-144.us-west-2.compute.internal?timeout=10s - read tcp 10.0.50.144:38964->10.0.44.50:6443: read: connection reset by peer
#2046033483872079872junit16 hours ago
2026-04-20T01:57:08Z node/ip-10-0-91-26.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l47vkt8f-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-91-26.us-west-2.compute.internal?timeout=10s - read tcp 10.0.91.26:39074->10.0.110.217:6443: read: connection reset by peer
2026-04-20T02:15:33Z node/ip-10-0-37-134.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l47vkt8f-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-134.us-west-2.compute.internal?timeout=10s - read tcp 10.0.37.134:51414->10.0.110.217:6443: read: connection reset by peer
2026-04-20T02:19:44Z node/ip-10-0-64-46.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l47vkt8f-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-46.us-west-2.compute.internal?timeout=10s - read tcp 10.0.64.46:41566->10.0.44.50:6443: read: connection reset by peer
2026-04-20T02:19:44Z node/ip-10-0-68-105.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l47vkt8f-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-68-105.us-west-2.compute.internal?timeout=10s - read tcp 10.0.68.105:46482->10.0.110.217:6443: read: connection reset by peer
2026-04-20T03:17:11Z node/ip-10-0-50-144.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l47vkt8f-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-144.us-west-2.compute.internal?timeout=10s - read tcp 10.0.50.144:38964->10.0.44.50:6443: read: connection reset by peer
#2045949759834820608junit22 hours ago
2026-04-19T20:29:52Z node/ip-10-0-32-48.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0tchvx6m-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-48.us-east-2.compute.internal?timeout=10s - read tcp 10.0.32.48:60388->10.0.22.7:6443: read: connection reset by peer
2026-04-19T20:51:09Z node/ip-10-0-74-132.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0tchvx6m-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-74-132.us-east-2.compute.internal?timeout=10s - read tcp 10.0.74.132:41870->10.0.92.250:6443: read: connection reset by peer
2026-04-19T20:54:53Z node/ip-10-0-12-126.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0tchvx6m-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-12-126.us-east-2.compute.internal?timeout=10s - read tcp 10.0.12.126:54436->10.0.22.7:6443: read: connection reset by peer
2026-04-19T20:54:54Z node/ip-10-0-120-26.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0tchvx6m-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-26.us-east-2.compute.internal?timeout=10s - read tcp 10.0.120.26:57716->10.0.22.7:6443: read: connection reset by peer
2026-04-19T20:55:04Z node/ip-10-0-120-26.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0tchvx6m-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-26.us-east-2.compute.internal?timeout=10s - read tcp 10.0.120.26:57796->10.0.22.7:6443: read: connection reset by peer
2026-04-19T21:46:47Z node/ip-10-0-24-11.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0tchvx6m-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-11.us-east-2.compute.internal?timeout=10s - read tcp 10.0.24.11:58842->10.0.22.7:6443: read: connection reset by peer
#2045925914184781824junit23 hours ago
2026-04-19T18:50:57Z node/ip-10-0-80-85.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5q5vbhgf-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-85.ec2.internal?timeout=10s - read tcp 10.0.80.85:42912->10.0.2.167:6443: read: connection reset by peer
2026-04-19T18:51:29Z node/ip-10-0-11-104.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5q5vbhgf-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-104.ec2.internal?timeout=10s - read tcp 10.0.11.104:54674->10.0.2.167:6443: read: connection reset by peer
2026-04-19T18:51:38Z node/ip-10-0-126-179.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5q5vbhgf-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-179.ec2.internal?timeout=10s - read tcp 10.0.126.179:44896->10.0.2.167:6443: read: connection reset by peer
2026-04-19T19:08:34Z node/ip-10-0-103-157.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5q5vbhgf-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-157.ec2.internal?timeout=10s - read tcp 10.0.103.157:45400->10.0.2.167:6443: read: connection reset by peer
2026-04-19T19:12:29Z node/ip-10-0-103-157.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5q5vbhgf-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-157.ec2.internal?timeout=10s - read tcp 10.0.103.157:56300->10.0.2.167:6443: read: connection reset by peer
2026-04-19T19:12:34Z node/ip-10-0-80-85.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5q5vbhgf-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-85.ec2.internal?timeout=10s - read tcp 10.0.80.85:45296->10.0.2.167:6443: read: connection reset by peer
2026-04-19T19:13:15Z node/ip-10-0-44-102.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5q5vbhgf-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-102.ec2.internal?timeout=10s - read tcp 10.0.44.102:42910->10.0.2.167:6443: read: connection reset by peer
2026-04-19T19:16:47Z node/ip-10-0-126-179.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5q5vbhgf-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-179.ec2.internal?timeout=10s - read tcp 10.0.126.179:52352->10.0.2.167:6443: read: connection reset by peer
2026-04-19T19:58:17Z node/ip-10-0-126-179.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5q5vbhgf-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-179.ec2.internal?timeout=10s - context deadline exceeded
#2045859922054221824junit28 hours ago
2026-04-19T14:43:50Z node/ip-10-0-2-121.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fth8d2nd-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-2-121.us-west-1.compute.internal?timeout=10s - read tcp 10.0.2.121:51988->10.0.87.237:6443: read: connection reset by peer
2026-04-19T14:43:51Z node/ip-10-0-62-244.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fth8d2nd-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-244.us-west-1.compute.internal?timeout=10s - read tcp 10.0.62.244:42028->10.0.44.69:6443: read: connection reset by peer
2026-04-19T14:48:24Z node/ip-10-0-3-239.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fth8d2nd-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-3-239.us-west-1.compute.internal?timeout=10s - read tcp 10.0.3.239:51028->10.0.44.69:6443: read: connection reset by peer
2026-04-19T14:52:01Z node/ip-10-0-62-244.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fth8d2nd-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-244.us-west-1.compute.internal?timeout=10s - read tcp 10.0.62.244:48130->10.0.87.237:6443: read: connection reset by peer
2026-04-19T14:52:06Z node/ip-10-0-103-246.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fth8d2nd-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-246.us-west-1.compute.internal?timeout=10s - read tcp 10.0.103.246:54166->10.0.87.237:6443: read: connection reset by peer
2026-04-19T15:28:35Z node/ip-10-0-3-239.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fth8d2nd-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-3-239.us-west-1.compute.internal?timeout=10s - read tcp 10.0.3.239:33052->10.0.87.237:6443: read: connection reset by peer
2026-04-19T15:35:51Z node/ip-10-0-67-224.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fth8d2nd-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-224.us-west-1.compute.internal?timeout=10s - read tcp 10.0.67.224:46232->10.0.44.69:6443: read: connection reset by peer
2026-04-19T15:42:31Z node/ip-10-0-42-167.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-fth8d2nd-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-42-167.us-west-1.compute.internal?timeout=10s - read tcp 10.0.42.167:47116->10.0.44.69:6443: read: connection reset by peer
#2045811576170090496junit31 hours ago
2026-04-19T11:31:00Z node/ip-10-0-28-201.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4wtvzq1-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-28-201.us-west-2.compute.internal?timeout=10s - read tcp 10.0.28.201:34598->10.0.27.60:6443: read: connection reset by peer
2026-04-19T11:39:20Z node/ip-10-0-28-201.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4wtvzq1-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-28-201.us-west-2.compute.internal?timeout=10s - read tcp 10.0.28.201:46744->10.0.84.205:6443: read: connection reset by peer
2026-04-19T12:21:20Z node/ip-10-0-96-120.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4wtvzq1-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-120.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T12:29:20Z node/ip-10-0-126-225.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4wtvzq1-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-225.us-west-2.compute.internal?timeout=10s - context deadline exceeded
2026-04-19T12:33:06Z node/ip-10-0-96-120.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4wtvzq1-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-120.us-west-2.compute.internal?timeout=10s - read tcp 10.0.96.120:50574->10.0.84.205:6443: read: connection reset by peer
2026-04-19T12:33:26Z node/ip-10-0-96-120.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4wtvzq1-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-120.us-west-2.compute.internal?timeout=10s - read tcp 10.0.96.120:36422->10.0.84.205:6443: read: connection reset by peer
2026-04-19T12:33:27Z node/ip-10-0-14-129.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4wtvzq1-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-129.us-west-2.compute.internal?timeout=10s - read tcp 10.0.14.129:46728->10.0.27.60:6443: read: connection reset by peer
#2045811576170090496junit31 hours ago
2026-04-19T11:13:13Z node/ip-10-0-126-225.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t4wtvzq1-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-225.us-west-2.compute.internal?timeout=10s - read tcp 10.0.126.225:59342->10.0.84.205:6443: read: connection reset by peer
#2045794405247356928junit32 hours ago
2026-04-19T10:30:44Z node/ip-10-0-37-99.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rpsqs0jt-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-99.us-west-1.compute.internal?timeout=10s - read tcp 10.0.37.99:38592->10.0.40.12:6443: read: connection reset by peer
2026-04-19T10:31:06Z node/ip-10-0-70-188.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rpsqs0jt-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-188.us-west-1.compute.internal?timeout=10s - read tcp 10.0.70.188:55180->10.0.40.12:6443: read: connection reset by peer
2026-04-19T11:11:33Z node/ip-10-0-46-132.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rpsqs0jt-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-132.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T11:15:31Z node/ip-10-0-77-18.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rpsqs0jt-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-77-18.us-west-1.compute.internal?timeout=10s - read tcp 10.0.77.18:49174->10.0.96.47:6443: read: connection reset by peer
2026-04-19T11:22:06Z node/ip-10-0-46-132.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rpsqs0jt-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-132.us-west-1.compute.internal?timeout=10s - read tcp 10.0.46.132:52030->10.0.40.12:6443: read: connection reset by peer
#2045726229629243392junit36 hours ago
2026-04-19T06:07:35Z node/ip-10-0-67-133.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-32yx6mzx-ab92b.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-133.us-west-2.compute.internal?timeout=10s - read tcp 10.0.67.133:60232->10.0.48.206:6443: read: connection reset by peer
2026-04-19T06:07:36Z node/ip-10-0-100-216.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-32yx6mzx-ab92b.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-216.us-west-2.compute.internal?timeout=10s - read tcp 10.0.100.216:44270->10.0.48.206:6443: read: connection reset by peer
2026-04-19T06:07:39Z node/ip-10-0-9-237.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-32yx6mzx-ab92b.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-237.us-west-2.compute.internal?timeout=10s - read tcp 10.0.9.237:45756->10.0.48.206:6443: read: connection reset by peer
2026-04-19T06:24:29Z node/ip-10-0-100-216.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-32yx6mzx-ab92b.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-216.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045679444290441216junit40 hours ago
2026-04-19T02:54:16Z node/ip-10-0-47-166.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-c9mv1zks-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-166.ec2.internal?timeout=10s - read tcp 10.0.47.166:42814->10.0.41.41:6443: read: connection reset by peer
2026-04-19T02:54:28Z node/ip-10-0-0-143.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-c9mv1zks-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-143.ec2.internal?timeout=10s - read tcp 10.0.0.143:53214->10.0.41.41:6443: read: connection reset by peer
2026-04-19T03:34:56Z node/ip-10-0-0-143.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-c9mv1zks-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-143.ec2.internal?timeout=10s - read tcp 10.0.0.143:57500->10.0.90.4:6443: read: connection reset by peer
2026-04-19T03:42:03Z node/ip-10-0-0-143.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-c9mv1zks-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-143.ec2.internal?timeout=10s - read tcp 10.0.0.143:49546->10.0.41.41:6443: read: connection reset by peer
2026-04-19T03:45:23Z node/ip-10-0-20-3.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-c9mv1zks-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-3.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T03:50:13Z node/ip-10-0-0-143.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-c9mv1zks-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-143.ec2.internal?timeout=10s - read tcp 10.0.0.143:44216->10.0.41.41:6443: read: connection reset by peer
2026-04-19T03:50:21Z node/ip-10-0-107-243.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-c9mv1zks-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-243.ec2.internal?timeout=10s - read tcp 10.0.107.243:53974->10.0.90.4:6443: read: connection reset by peer
#2045659097344249856junit41 hours ago
2026-04-19T01:14:10Z node/ip-10-0-52-94.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xkc23329-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-94.us-west-2.compute.internal?timeout=10s - read tcp 10.0.52.94:50178->10.0.7.213:6443: read: connection reset by peer
2026-04-19T01:14:18Z node/ip-10-0-110-95.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xkc23329-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-110-95.us-west-2.compute.internal?timeout=10s - read tcp 10.0.110.95:59254->10.0.7.213:6443: read: connection reset by peer
2026-04-19T01:14:24Z node/ip-10-0-51-168.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xkc23329-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-168.us-west-2.compute.internal?timeout=10s - read tcp 10.0.51.168:54836->10.0.67.82:6443: read: connection reset by peer
2026-04-19T01:32:36Z node/ip-10-0-51-168.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xkc23329-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-168.us-west-2.compute.internal?timeout=10s - read tcp 10.0.51.168:57766->10.0.7.213:6443: read: connection reset by peer
2026-04-19T01:32:42Z node/ip-10-0-101-213.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xkc23329-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-213.us-west-2.compute.internal?timeout=10s - read tcp 10.0.101.213:48492->10.0.67.82:6443: read: connection reset by peer
2026-04-19T01:32:50Z node/ip-10-0-113-130.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xkc23329-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-113-130.us-west-2.compute.internal?timeout=10s - read tcp 10.0.113.130:49510->10.0.7.213:6443: read: connection reset by peer
2026-04-19T01:37:19Z node/ip-10-0-101-213.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xkc23329-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-213.us-west-2.compute.internal?timeout=10s - read tcp 10.0.101.213:33012->10.0.67.82:6443: read: connection reset by peer
2026-04-19T01:40:52Z node/ip-10-0-52-94.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xkc23329-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-94.us-west-2.compute.internal?timeout=10s - read tcp 10.0.52.94:33602->10.0.7.213:6443: read: connection reset by peer
2026-04-19T02:18:58Z node/ip-10-0-101-213.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xkc23329-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-213.us-west-2.compute.internal?timeout=10s - read tcp 10.0.101.213:48476->10.0.7.213:6443: read: connection reset by peer
2026-04-19T02:26:15Z node/ip-10-0-51-168.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xkc23329-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-168.us-west-2.compute.internal?timeout=10s - read tcp 10.0.51.168:43578->10.0.7.213:6443: read: connection reset by peer
2026-04-19T02:29:43Z node/ip-10-0-101-213.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xkc23329-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-213.us-west-2.compute.internal?timeout=10s - context deadline exceeded
#2045600180677382144junit45 hours ago
2026-04-18T21:17:15Z node/ip-10-0-100-161.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-k1djt7rn-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-161.ec2.internal?timeout=10s - read tcp 10.0.100.161:43594->10.0.10.29:6443: read: connection reset by peer
2026-04-18T21:17:16Z node/ip-10-0-50-112.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-k1djt7rn-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-112.ec2.internal?timeout=10s - read tcp 10.0.50.112:47252->10.0.10.29:6443: read: connection reset by peer
2026-04-18T21:17:22Z node/ip-10-0-113-236.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-k1djt7rn-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-113-236.ec2.internal?timeout=10s - read tcp 10.0.113.236:49938->10.0.10.29:6443: read: connection reset by peer
2026-04-18T21:17:25Z node/ip-10-0-100-161.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-k1djt7rn-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-161.ec2.internal?timeout=10s - read tcp 10.0.100.161:43902->10.0.10.29:6443: read: connection reset by peer
2026-04-18T21:39:28Z node/ip-10-0-113-236.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-k1djt7rn-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-113-236.ec2.internal?timeout=10s - read tcp 10.0.113.236:50570->10.0.67.149:6443: read: connection reset by peer
2026-04-18T21:39:33Z node/ip-10-0-50-112.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-k1djt7rn-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-112.ec2.internal?timeout=10s - read tcp 10.0.50.112:47538->10.0.10.29:6443: read: connection reset by peer
2026-04-18T22:27:44Z node/ip-10-0-113-236.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-k1djt7rn-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-113-236.ec2.internal?timeout=10s - read tcp 10.0.113.236:51832->10.0.10.29:6443: read: connection reset by peer
#2045568473723047936junit47 hours ago
2026-04-18T19:23:14Z node/ip-10-0-98-210.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nglk1c83-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-98-210.ec2.internal?timeout=10s - read tcp 10.0.98.210:38730->10.0.21.144:6443: read: connection reset by peer
2026-04-18T19:27:21Z node/ip-10-0-90-183.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nglk1c83-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-183.ec2.internal?timeout=10s - read tcp 10.0.90.183:38928->10.0.86.247:6443: read: connection reset by peer
2026-04-18T19:31:46Z node/ip-10-0-90-183.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nglk1c83-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-183.ec2.internal?timeout=10s - read tcp 10.0.90.183:38966->10.0.86.247:6443: read: connection reset by peer
2026-04-18T19:31:56Z node/ip-10-0-90-183.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nglk1c83-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-183.ec2.internal?timeout=10s - read tcp 10.0.90.183:38974->10.0.86.247:6443: read: connection reset by peer
2026-04-18T20:10:06Z node/ip-10-0-126-248.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nglk1c83-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-248.ec2.internal?timeout=10s - read tcp 10.0.126.248:43506->10.0.86.247:6443: read: connection reset by peer
2026-04-18T20:17:01Z node/ip-10-0-23-84.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nglk1c83-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-84.ec2.internal?timeout=10s - read tcp 10.0.23.84:57048->10.0.86.247:6443: read: connection reset by peer
2026-04-18T20:17:04Z node/ip-10-0-103-32.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nglk1c83-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-32.ec2.internal?timeout=10s - read tcp 10.0.103.32:57518->10.0.21.144:6443: read: connection reset by peer
2026-04-18T20:20:32Z node/ip-10-0-126-248.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-nglk1c83-ab92b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-248.ec2.internal?timeout=10s - context deadline exceeded
pull-ci-openshift-cluster-network-operator-release-4.18-e2e-aws-ovn-hypershift-conformance (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2046251451646218240junit4 hours ago
# [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Suite:openshift/conformance/parallel] [Suite:k8s]
fail [k8s.io/kubernetes/test/e2e/framework/framework.go:396]: Couldn't delete ns: "e2e-services-2925": Delete "https://a17188d48c7ea4c10ab2637bbeadc710-c1ae9e5e03a55826.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-services-2925": read tcp 172.24.74.7:48042->52.3.12.132:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://a17188d48c7ea4c10ab2637bbeadc710-c1ae9e5e03a55826.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-services-2925", Err:(*net.OpError)(0xc001acbe50)})
Error: exit with code 1
#2046251451646218240junit4 hours ago
# [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
fail [k8s.io/kubernetes/test/e2e/framework/network/utils.go:929]: Error creating Pod: Post "https://a17188d48c7ea4c10ab2637bbeadc710-c1ae9e5e03a55826.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/e2e-pod-network-test-4962/pods": read tcp 172.24.74.7:53836->52.3.12.132:6443: read: connection reset by peer
Error: exit with code 1
periodic-ci-openshift-release-main-ci-4.22-e2e-vsphere-ovn-upgrade (all) - 7 runs, 0% failed, 100% of runs match
#2046237120745443328junit4 hours ago
2026-04-20T16:39:36Z node/ci-op-1l09xm6p-29d19-zc5xf-worker-0-pb7xh - reason/FailedToUpdateLease https://api-int.ci-op-1l09xm6p-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-1l09xm6p-29d19-zc5xf-worker-0-pb7xh?timeout=10s - read tcp 10.221.127.117:41418->10.221.127.10:6443: read: connection reset by peer
2026-04-20T16:39:36Z node/ci-op-1l09xm6p-29d19-zc5xf-worker-0-gl96k - reason/FailedToUpdateLease https://api-int.ci-op-1l09xm6p-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-1l09xm6p-29d19-zc5xf-worker-0-gl96k?timeout=10s - read tcp 10.221.127.63:55842->10.221.127.10:6443: read: connection reset by peer
2026-04-20T16:39:38Z node/ci-op-1l09xm6p-29d19-zc5xf-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-1l09xm6p-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-1l09xm6p-29d19-zc5xf-master-0?timeout=10s - read tcp 10.221.127.25:59212->10.221.127.10:6443: read: connection reset by peer
2026-04-20T16:39:42Z node/ci-op-1l09xm6p-29d19-zc5xf-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-1l09xm6p-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-1l09xm6p-29d19-zc5xf-master-1?timeout=10s - read tcp 10.221.127.38:34674->10.221.127.10:6443: read: connection reset by peer
#2046144455160893440junit11 hours ago
2026-04-20T09:58:16Z node/ci-op-1y0k0flz-29d19-tx52j-worker-0-b4dwf - reason/FailedToUpdateLease https://api-int.ci-op-1y0k0flz-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-1y0k0flz-29d19-tx52j-worker-0-b4dwf?timeout=10s - read tcp 10.95.160.30:32790->10.95.160.12:6443: read: connection reset by peer
2026-04-20T09:58:16Z node/ci-op-1y0k0flz-29d19-tx52j-worker-0-bw5cw - reason/FailedToUpdateLease https://api-int.ci-op-1y0k0flz-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-1y0k0flz-29d19-tx52j-worker-0-bw5cw?timeout=10s - read tcp 10.95.160.28:48874->10.95.160.12:6443: read: connection reset by peer
2026-04-20T09:58:21Z node/ci-op-1y0k0flz-29d19-tx52j-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-1y0k0flz-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-1y0k0flz-29d19-tx52j-master-2?timeout=10s - read tcp 10.95.160.123:39050->10.95.160.12:6443: read: connection reset by peer
2026-04-20T09:58:23Z node/ci-op-1y0k0flz-29d19-tx52j-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-1y0k0flz-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-1y0k0flz-29d19-tx52j-master-0?timeout=10s - read tcp 10.95.160.122:57346->10.95.160.12:6443: read: connection reset by peer
#2046044909470748672junit18 hours ago
2026-04-20T03:19:09Z node/ci-op-jj9vlfik-29d19-ggd4r-worker-0-59rh6 - reason/FailedToUpdateLease https://api-int.ci-op-jj9vlfik-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-jj9vlfik-29d19-ggd4r-worker-0-59rh6?timeout=10s - read tcp 10.93.251.116:45780->10.93.251.6:6443: read: connection reset by peer
2026-04-20T03:19:10Z node/ci-op-jj9vlfik-29d19-ggd4r-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-jj9vlfik-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-jj9vlfik-29d19-ggd4r-master-2?timeout=10s - read tcp 10.93.251.112:53452->10.93.251.6:6443: read: connection reset by peer
2026-04-20T03:25:16Z node/ci-op-jj9vlfik-29d19-ggd4r-worker-0-59rh6 - reason/FailedToUpdateLease https://api-int.ci-op-jj9vlfik-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-jj9vlfik-29d19-ggd4r-worker-0-59rh6?timeout=10s - read tcp 10.93.251.116:45330->10.93.251.6:6443: read: connection reset by peer
#2045949234657628160junit24 hours ago
2026-04-19T21:11:34Z node/ci-op-xylwp1g8-29d19-ql4fx-worker-0-l79ht - reason/FailedToUpdateLease https://api-int.ci-op-xylwp1g8-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-xylwp1g8-29d19-ql4fx-worker-0-l79ht?timeout=10s - read tcp 10.93.152.44:36832->10.93.152.14:6443: read: connection reset by peer
2026-04-19T21:11:35Z node/ci-op-xylwp1g8-29d19-ql4fx-worker-0-x67ml - reason/FailedToUpdateLease https://api-int.ci-op-xylwp1g8-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-xylwp1g8-29d19-ql4fx-worker-0-x67ml?timeout=10s - read tcp 10.93.152.45:52242->10.93.152.14:6443: read: connection reset by peer
2026-04-19T21:11:37Z node/ci-op-xylwp1g8-29d19-ql4fx-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-xylwp1g8-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-xylwp1g8-29d19-ql4fx-master-1?timeout=10s - read tcp 10.93.152.34:38218->10.93.152.14:6443: read: connection reset by peer
2026-04-19T21:11:37Z node/ci-op-xylwp1g8-29d19-ql4fx-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-xylwp1g8-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-xylwp1g8-29d19-ql4fx-master-2?timeout=10s - read tcp 10.93.152.36:57252->10.93.152.14:6443: read: connection reset by peer
#2045833567291838464junit32 hours ago
2026-04-19T13:23:14Z node/ci-op-b6ib67ts-29d19-995tp-worker-0-zkpkw - reason/FailedToUpdateLease https://api-int.ci-op-b6ib67ts-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-b6ib67ts-29d19-995tp-worker-0-zkpkw?timeout=10s - read tcp 10.221.127.28:41284->10.221.127.4:6443: read: connection reset by peer
2026-04-19T13:23:15Z node/ci-op-b6ib67ts-29d19-995tp-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-b6ib67ts-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-b6ib67ts-29d19-995tp-master-0?timeout=10s - read tcp 10.221.127.69:51168->10.221.127.4:6443: read: connection reset by peer
2026-04-19T13:23:21Z node/ci-op-b6ib67ts-29d19-995tp-worker-0-zhv58 - reason/FailedToUpdateLease https://api-int.ci-op-b6ib67ts-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-b6ib67ts-29d19-995tp-worker-0-zhv58?timeout=10s - read tcp 10.221.127.37:34706->10.221.127.4:6443: read: connection reset by peer
#2045733564271562752junit38 hours ago
2026-04-19T06:15:38Z node/ci-op-fxqx5bm5-29d19-thlvf-worker-0-mdl92 - reason/FailedToUpdateLease https://api-int.ci-op-fxqx5bm5-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fxqx5bm5-29d19-thlvf-worker-0-mdl92?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T06:44:21Z node/ci-op-fxqx5bm5-29d19-thlvf-worker-0-mdl92 - reason/FailedToUpdateLease https://api-int.ci-op-fxqx5bm5-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fxqx5bm5-29d19-thlvf-worker-0-mdl92?timeout=10s - read tcp 10.93.152.111:60668->10.93.152.12:6443: read: connection reset by peer
2026-04-19T06:44:22Z node/ci-op-fxqx5bm5-29d19-thlvf-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-fxqx5bm5-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fxqx5bm5-29d19-thlvf-master-1?timeout=10s - read tcp 10.93.152.107:44240->10.93.152.12:6443: read: connection reset by peer
2026-04-19T06:44:23Z node/ci-op-fxqx5bm5-29d19-thlvf-worker-0-tfmjr - reason/FailedToUpdateLease https://api-int.ci-op-fxqx5bm5-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fxqx5bm5-29d19-thlvf-worker-0-tfmjr?timeout=10s - read tcp 10.93.152.112:48498->10.93.152.12:6443: read: connection reset by peer
2026-04-19T06:50:18Z node/ci-op-fxqx5bm5-29d19-thlvf-worker-0-mdl92 - reason/FailedToUpdateLease https://api-int.ci-op-fxqx5bm5-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fxqx5bm5-29d19-thlvf-worker-0-mdl92?timeout=10s - read tcp 10.93.152.111:33286->10.93.152.12:6443: read: connection reset by peer
2026-04-19T06:50:20Z node/ci-op-fxqx5bm5-29d19-thlvf-worker-0-tfmjr - reason/FailedToUpdateLease https://api-int.ci-op-fxqx5bm5-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fxqx5bm5-29d19-thlvf-worker-0-tfmjr?timeout=10s - read tcp 10.93.152.112:56878->10.93.152.12:6443: read: connection reset by peer
2026-04-19T06:50:20Z node/ci-op-fxqx5bm5-29d19-thlvf-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-fxqx5bm5-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fxqx5bm5-29d19-thlvf-master-1?timeout=10s - read tcp 10.93.152.107:36532->10.93.152.12:6443: read: connection reset by peer
2026-04-19T06:50:24Z node/ci-op-fxqx5bm5-29d19-thlvf-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-fxqx5bm5-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fxqx5bm5-29d19-thlvf-master-0?timeout=10s - read tcp 10.93.152.108:33768->10.93.152.12:6443: read: connection reset by peer
#2045643417576280064junit44 hours ago
2026-04-19T01:04:03Z node/ci-op-wd6tx584-29d19-gh2bv-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-wd6tx584-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-wd6tx584-29d19-gh2bv-master-1?timeout=10s - read tcp 10.95.160.85:60152->10.95.160.12:6443: read: connection reset by peer
2026-04-19T01:04:08Z node/ci-op-wd6tx584-29d19-gh2bv-worker-0-ws5l7 - reason/FailedToUpdateLease https://api-int.ci-op-wd6tx584-29d19.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-wd6tx584-29d19-gh2bv-worker-0-ws5l7?timeout=10s - read tcp 10.95.160.94:57384->10.95.160.12:6443: read: connection reset by peer
pull-ci-openshift-api-master-e2e-aws-ovn-hypershift-conformance (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046231464697139200junit5 hours ago
    <*url.Error | 0xc00a0f2090>:
    Post "https://aebaa8736a90243ab93e702ff32fdd3e-0e938945f4705ae1.elb.us-east-1.amazonaws.com:6443/apis/oauth.openshift.io/v1/oauthclients": read tcp 10.130.124.10:44222->44.196.27.37:6443: read: connection reset by peer
    {
periodic-ci-openshift-release-main-ci-4.18-e2e-aws-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2046219360766267392junit5 hours ago
2026-04-20T14:17:35Z node/ip-10-0-53-110.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hzc0q9xt-98515.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-110.us-east-2.compute.internal?timeout=10s - read tcp 10.0.53.110:53442->10.0.26.224:6443: read: connection reset by peer
2026-04-20T14:17:35Z node/ip-10-0-74-68.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hzc0q9xt-98515.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-74-68.us-east-2.compute.internal?timeout=10s - read tcp 10.0.74.68:46144->10.0.26.224:6443: read: connection reset by peer
2026-04-20T14:33:34Z node/ip-10-0-80-148.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hzc0q9xt-98515.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-148.us-east-2.compute.internal?timeout=10s - read tcp 10.0.80.148:33086->10.0.68.244:6443: read: connection reset by peer
2026-04-20T14:37:30Z node/ip-10-0-74-68.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hzc0q9xt-98515.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-74-68.us-east-2.compute.internal?timeout=10s - read tcp 10.0.74.68:38008->10.0.26.224:6443: read: connection reset by peer
2026-04-20T15:12:14Z node/ip-10-0-74-68.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hzc0q9xt-98515.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-74-68.us-east-2.compute.internal?timeout=10s - read tcp 10.0.74.68:35866->10.0.26.224:6443: read: connection reset by peer
2026-04-20T15:27:09Z node/ip-10-0-74-68.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hzc0q9xt-98515.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-74-68.us-east-2.compute.internal?timeout=10s - read tcp 10.0.74.68:56076->10.0.68.244:6443: read: connection reset by peer
2026-04-20T15:27:10Z node/ip-10-0-21-193.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-hzc0q9xt-98515.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-193.us-east-2.compute.internal?timeout=10s - read tcp 10.0.21.193:35542->10.0.68.244:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv4 (all) - 10 runs, 30% failed, 33% of failures match = 10% impact
#2046198031388250112junit5 hours ago
2026-04-20T15:09:38Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T15:09:42Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:47954->192.168.111.5:6443: read: connection reset by peer
2026-04-20T15:09:43Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:37134->192.168.111.5:6443: read: connection reset by peer
2026-04-20T15:09:44Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T15:09:48Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:58512->192.168.111.5:6443: read: connection reset by peer
2026-04-20T15:09:48Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:47890->192.168.111.5:6443: read: connection reset by peer
2026-04-20T15:09:49Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - http2: client connection lost
#2046198031388250112junit5 hours ago
2026-04-20T15:10:25Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T15:10:32Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:55532->192.168.111.5:6443: read: connection reset by peer
2026-04-20T15:10:33Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046198031388250112junit5 hours ago
2026-04-20T15:44:59Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T15:44:59Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:38998->192.168.111.5:6443: read: connection reset by peer
2026-04-20T15:45:06Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:39242->192.168.111.5:6443: read: connection reset by peer
2026-04-20T15:45:32Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-multiarch-main-nightly-4.18-ocp-e2e-aws-ovn-multi-x-ax (all) - 1 runs, 0% failed, 100% of runs match
#2046224857271635968junit5 hours ago
2026-04-20T14:41:57Z node/ip-10-0-40-247.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jn3tvvzd-969b9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-247.us-west-2.compute.internal?timeout=10s - read tcp 10.0.40.247:50390->10.0.79.88:6443: read: connection reset by peer
2026-04-20T14:42:01Z node/ip-10-0-100-68.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jn3tvvzd-969b9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-68.us-west-2.compute.internal?timeout=10s - read tcp 10.0.100.68:52982->10.0.30.12:6443: read: connection reset by peer
2026-04-20T14:42:07Z node/ip-10-0-38-112.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jn3tvvzd-969b9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-38-112.us-west-2.compute.internal?timeout=10s - read tcp 10.0.38.112:52438->10.0.30.12:6443: read: connection reset by peer
2026-04-20T14:42:32Z node/ip-10-0-64-19.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jn3tvvzd-969b9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-19.us-west-2.compute.internal?timeout=10s - read tcp 10.0.64.19:44672->10.0.79.88:6443: read: connection reset by peer
pull-ci-openshift-cluster-network-operator-release-4.18-4.18-upgrade-from-stable-4.17-e2e-aws-ovn-upgrade-ipsec (all) - 1 runs, 0% failed, 100% of runs match
#2046216652789387264junit5 hours ago
Apr 20 14:45:13.353 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers: Patch "https://api-int.ci-op-i30yhxi7-256b0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/multus-cluster-readers?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.21.43:60648->10.0.28.92:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 14:45:13.353 - 26s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers: Patch "https://api-int.ci-op-i30yhxi7-256b0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/multus-cluster-readers?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.21.43:60648->10.0.28.92:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 14:45:39.706 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046216652789387264junit5 hours ago
2026-04-20T14:45:03Z node/ip-10-0-19-146.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i30yhxi7-256b0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-146.us-west-1.compute.internal?timeout=10s - read tcp 10.0.19.146:54124->10.0.28.92:6443: read: connection reset by peer
2026-04-20T14:49:09Z node/ip-10-0-21-43.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i30yhxi7-256b0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-43.us-west-1.compute.internal?timeout=10s - read tcp 10.0.21.43:54150->10.0.28.92:6443: read: connection reset by peer
2026-04-20T14:49:28Z node/ip-10-0-19-146.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i30yhxi7-256b0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-146.us-west-1.compute.internal?timeout=10s - read tcp 10.0.19.146:59054->10.0.28.92:6443: read: connection reset by peer
2026-04-20T15:48:42Z node/ip-10-0-20-184.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i30yhxi7-256b0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-184.us-west-1.compute.internal?timeout=10s - read tcp 10.0.20.184:42768->10.0.28.92:6443: read: connection reset by peer
pull-ci-openshift-api-release-4.22-e2e-aws-ovn-hypershift-conformance (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#2046219378797580288junit5 hours ago
# [sig-node] [DRA] kubelet [Feature:DynamicResourceAllocation] on single node with multiple claims allocation requests an already allocated and a new claim for a pod [KubeletMinVersion:1.35]
fail [k8s.io/kubernetes/test/e2e/dra/utils/builder.go:522]: delete claim: Delete "https://aef4949f3b0ab4b2c8e79820e9a56d12-ff374e3f32c2857e.elb.us-east-1.amazonaws.com:6443/apis/resource.k8s.io/v1/namespaces/e2e-dra-3441/resourceclaims/external-claim-2": read tcp 172.24.3.53:47288->54.157.232.233:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.20-ocp-e2e-aws-ovn-techpreview-multi-a-a (all) - 1 runs, 0% failed, 100% of runs match
#2046207626022227968junit5 hours ago
2026-04-20T14:00:20Z node/ip-10-0-32-152.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x5wlib5d-7a5d3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-152.us-west-2.compute.internal?timeout=10s - read tcp 10.0.32.152:41010->10.0.39.68:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.20-ocp-e2e-aws-ovn-techpreview-serial-multi-a-a-1of3 (all) - 1 runs, 0% failed, 100% of runs match
#2046207626076753920junit5 hours ago
2026-04-20T14:34:52Z node/ip-10-0-74-72.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i0k4k1wn-2faa8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-74-72.us-east-2.compute.internal?timeout=10s - read tcp 10.0.74.72:47774->10.0.127.250:6443: read: connection reset by peer
2026-04-20T14:39:12Z node/ip-10-0-65-59.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i0k4k1wn-2faa8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-59.us-east-2.compute.internal?timeout=10s - read tcp 10.0.65.59:53634->10.0.127.250:6443: read: connection reset by peer
2026-04-20T14:43:07Z node/ip-10-0-79-62.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i0k4k1wn-2faa8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-62.us-east-2.compute.internal?timeout=10s - read tcp 10.0.79.62:42704->10.0.4.77:6443: read: connection reset by peer
2026-04-20T14:43:12Z node/ip-10-0-31-135.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i0k4k1wn-2faa8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-135.us-east-2.compute.internal?timeout=10s - read tcp 10.0.31.135:50122->10.0.127.250:6443: read: connection reset by peer
2026-04-20T14:47:43Z node/ip-10-0-79-62.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i0k4k1wn-2faa8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-62.us-east-2.compute.internal?timeout=10s - read tcp 10.0.79.62:51716->10.0.127.250:6443: read: connection reset by peer
2026-04-20T14:47:55Z node/ip-10-0-93-68.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i0k4k1wn-2faa8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-93-68.us-east-2.compute.internal?timeout=10s - read tcp 10.0.93.68:54956->10.0.127.250:6443: read: connection reset by peer
2026-04-20T15:29:48Z node/ip-10-0-93-68.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i0k4k1wn-2faa8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-93-68.us-east-2.compute.internal?timeout=10s - context deadline exceeded
periodic-ci-openshift-hypershift-release-4.21-periodics-mce-e2e-aws-critical (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#2046192494319767552junit6 hours ago
Apr 20 13:14:35.937 - 35s   E backend-disruption-name/cache-kube-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/649e85dc-f035-4aaa-8dbc-3ee935733f64 backend-disruption-name/cache-kube-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://a1a256fa3e9ff4ab6b350f82b75f2371-8c5b66042e4725ae.elb.us-east-2.amazonaws.com:6443/api/v1/namespaces/default?resourceVersion=11169": net/http: TLS handshake timeout
Apr 20 13:15:11.937 - 999ms E backend-disruption-name/cache-kube-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/8d5ae3a1-ea65-4ee2-ad49-ca1abf7f7d06 backend-disruption-name/cache-kube-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://a1a256fa3e9ff4ab6b350f82b75f2371-8c5b66042e4725ae.elb.us-east-2.amazonaws.com:6443/api/v1/namespaces/default?resourceVersion=11169": read tcp 10.131.78.196:47752->3.135.149.54:6443: read: connection reset by peer
Apr 20 13:15:12.936 - 1s    E backend-disruption-name/cache-kube-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/6b230a0f-9118-455d-9002-d53e0683f19d backend-disruption-name/cache-kube-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://a1a256fa3e9ff4ab6b350f82b75f2371-8c5b66042e4725ae.elb.us-east-2.amazonaws.com:6443/api/v1/namespaces/default?resourceVersion=11169": net/http: TLS handshake timeout
#2046192494319767552junit6 hours ago
Apr 20 13:14:34.936 - 37s   E backend-disruption-name/openshift-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/f4a46b64-32eb-4761-a975-f8064292498e backend-disruption-name/openshift-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://a1a256fa3e9ff4ab6b350f82b75f2371-8c5b66042e4725ae.elb.us-east-2.amazonaws.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams": net/http: TLS handshake timeout
Apr 20 13:15:11.937 - 999ms E backend-disruption-name/openshift-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/c4f1b793-61c5-487d-b4d5-556da5abad27 backend-disruption-name/openshift-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://a1a256fa3e9ff4ab6b350f82b75f2371-8c5b66042e4725ae.elb.us-east-2.amazonaws.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams": read tcp 10.131.78.196:47712->3.135.149.54:6443: read: connection reset by peer
Apr 20 13:15:12.937 - 999ms E backend-disruption-name/openshift-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/82feda5d-76ca-4b29-a1f7-79f8cba4be99 backend-disruption-name/openshift-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://a1a256fa3e9ff4ab6b350f82b75f2371-8c5b66042e4725ae.elb.us-east-2.amazonaws.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams": net/http: TLS handshake timeout
#2046192494319767552junit6 hours ago
Apr 20 13:14:34.936 - 37s   E backend-disruption-name/cache-openshift-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/3fbdcdfb-edbf-4fe9-81dc-f0923682e3b2 backend-disruption-name/cache-openshift-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://a1a256fa3e9ff4ab6b350f82b75f2371-8c5b66042e4725ae.elb.us-east-2.amazonaws.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/cli?resourceVersion=9156": net/http: TLS handshake timeout
Apr 20 13:15:11.937 - 999ms E backend-disruption-name/cache-openshift-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/811068be-3fe7-491a-ad32-e52c63a7b1da backend-disruption-name/cache-openshift-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://a1a256fa3e9ff4ab6b350f82b75f2371-8c5b66042e4725ae.elb.us-east-2.amazonaws.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/cli?resourceVersion=9156": read tcp 10.131.78.196:47798->3.135.149.54:6443: read: connection reset by peer
Apr 20 13:15:12.936 - 999ms E backend-disruption-name/cache-openshift-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/5a952216-2ce2-4852-80b4-f84ca1f7b0fa backend-disruption-name/cache-openshift-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://a1a256fa3e9ff4ab6b350f82b75f2371-8c5b66042e4725ae.elb.us-east-2.amazonaws.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/cli?resourceVersion=9156": net/http: TLS handshake timeout
#2046192494319767552junit6 hours ago
Apr 20 13:14:34.936 - 37s   E backend-disruption-name/oauth-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/99757dfc-f8dc-4427-aafe-94d6ed4243ae backend-disruption-name/oauth-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://a1a256fa3e9ff4ab6b350f82b75f2371-8c5b66042e4725ae.elb.us-east-2.amazonaws.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: TLS handshake timeout
Apr 20 13:15:11.937 - 999ms E backend-disruption-name/oauth-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/873be4a5-24a8-4b5c-9326-c636ee27d615 backend-disruption-name/oauth-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://a1a256fa3e9ff4ab6b350f82b75f2371-8c5b66042e4725ae.elb.us-east-2.amazonaws.com:6443/apis/oauth.openshift.io/v1/oauthclients": read tcp 10.131.78.196:47716->3.135.149.54:6443: read: connection reset by peer
Apr 20 13:15:12.937 - 999ms E backend-disruption-name/oauth-api-reused-connections connection/reused disruption/openshift-tests reason/DisruptionBegan request-audit-id/7336a146-cf45-47e4-b430-3a8d1a78530f backend-disruption-name/oauth-api-reused-connections connection/reused disruption/openshift-tests stopped responding to GET requests over reused connections: Get "https://a1a256fa3e9ff4ab6b350f82b75f2371-8c5b66042e4725ae.elb.us-east-2.amazonaws.com:6443/apis/oauth.openshift.io/v1/oauthclients": net/http: TLS handshake timeout
periodic-ci-openshift-release-main-nightly-4.21-e2e-agent-ha-dualstack-conformance (all) - 4 runs, 50% failed, 100% of failures match = 50% impact
#2046197533427896320junit5 hours ago
2026-04-20T15:00:42Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.81:44866->192.168.111.5:6443: read: connection reset by peer
2026-04-20T15:00:43Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - http2: client connection force closed via ClientConn.Close
#2045654098673405952junit42 hours ago
2026-04-19T00:59:30Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.81:39952->192.168.111.5:6443: read: connection reset by peer
2026-04-19T00:59:41Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.81:45934->192.168.111.5:6443: read: connection reset by peer
2026-04-19T00:59:51Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - context deadline exceeded
periodic-ci-openshift-multiarch-main-nightly-4.20-ocp-e2e-serial-aws-ovn-multi-x-ax (all) - 6 runs, 50% failed, 200% of failures match = 100% impact
#2046174115617837056junit6 hours ago
2026-04-20T11:22:22Z node/ip-10-0-11-191.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-191.us-west-1.compute.internal?timeout=10s - read tcp 10.0.11.191:35278->10.0.12.167:6443: read: connection reset by peer
2026-04-20T11:26:20Z node/ip-10-0-114-220.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-220.us-west-1.compute.internal?timeout=10s - read tcp 10.0.114.220:42292->10.0.107.162:6443: read: connection reset by peer
2026-04-20T11:26:25Z node/ip-10-0-76-209.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-209.us-west-1.compute.internal?timeout=10s - read tcp 10.0.76.209:54166->10.0.12.167:6443: read: connection reset by peer
2026-04-20T14:01:24Z node/ip-10-0-44-154.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-154.us-west-1.compute.internal?timeout=10s - read tcp 10.0.44.154:47712->10.0.12.167:6443: read: connection reset by peer
2026-04-20T14:06:01Z node/ip-10-0-76-209.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-209.us-west-1.compute.internal?timeout=10s - read tcp 10.0.76.209:54588->10.0.107.162:6443: read: connection reset by peer
2026-04-20T14:06:06Z node/ip-10-0-27-222.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-222.us-west-1.compute.internal?timeout=10s - read tcp 10.0.27.222:50226->10.0.12.167:6443: read: connection reset by peer
2026-04-20T14:10:06Z node/ip-10-0-76-209.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-209.us-west-1.compute.internal?timeout=10s - read tcp 10.0.76.209:54678->10.0.107.162:6443: read: connection reset by peer
2026-04-20T14:10:08Z node/ip-10-0-37-152.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-152.us-west-1.compute.internal?timeout=10s - read tcp 10.0.37.152:50244->10.0.12.167:6443: read: connection reset by peer
2026-04-20T14:10:15Z node/ip-10-0-44-154.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-154.us-west-1.compute.internal?timeout=10s - read tcp 10.0.44.154:33996->10.0.107.162:6443: read: connection reset by peer
2026-04-20T14:10:18Z node/ip-10-0-114-220.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-220.us-west-1.compute.internal?timeout=10s - read tcp 10.0.114.220:45806->10.0.12.167:6443: read: connection reset by peer
2026-04-20T14:14:02Z node/ip-10-0-114-220.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-220.us-west-1.compute.internal?timeout=10s - read tcp 10.0.114.220:42674->10.0.12.167:6443: read: connection reset by peer
2026-04-20T14:14:05Z node/ip-10-0-27-222.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-222.us-west-1.compute.internal?timeout=10s - read tcp 10.0.27.222:38012->10.0.12.167:6443: read: connection reset by peer
2026-04-20T14:14:13Z node/ip-10-0-114-220.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-220.us-west-1.compute.internal?timeout=10s - read tcp 10.0.114.220:42720->10.0.12.167:6443: read: connection reset by peer
2026-04-20T14:58:52Z node/ip-10-0-44-154.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0y6n74k4-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-154.us-west-1.compute.internal?timeout=10s - context deadline exceeded
#2046041621119635456junit15 hours ago
Apr 20 04:17:01.384 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: failed to apply / update (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: Patch "https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-operator/configmaps/iptables-alerter-script?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.36.119:47148->10.0.103.133:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 20 04:17:01.384 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: failed to apply / update (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: Patch "https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-operator/configmaps/iptables-alerter-script?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.36.119:47148->10.0.103.133:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 20 04:17:29.386 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046041621119635456junit15 hours ago
2026-04-20T04:20:46Z node/ip-10-0-126-122.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-122.ec2.internal?timeout=10s - read tcp 10.0.126.122:37062->10.0.56.13:6443: read: connection reset by peer
2026-04-20T04:24:47Z node/ip-10-0-31-31.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-31.ec2.internal?timeout=10s - read tcp 10.0.31.31:51416->10.0.103.133:6443: read: connection reset by peer
2026-04-20T04:24:47Z node/ip-10-0-103-226.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-226.ec2.internal?timeout=10s - read tcp 10.0.103.226:36698->10.0.56.13:6443: read: connection reset by peer
2026-04-20T04:29:36Z node/ip-10-0-36-119.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-119.ec2.internal?timeout=10s - read tcp 10.0.36.119:53454->10.0.103.133:6443: read: connection reset by peer
2026-04-20T04:29:38Z node/ip-10-0-21-180.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-180.ec2.internal?timeout=10s - read tcp 10.0.21.180:42370->10.0.103.133:6443: read: connection reset by peer
2026-04-20T04:29:40Z node/ip-10-0-99-180.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-99-180.ec2.internal?timeout=10s - read tcp 10.0.99.180:36726->10.0.103.133:6443: read: connection reset by peer
2026-04-20T04:33:27Z node/ip-10-0-103-226.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-226.ec2.internal?timeout=10s - read tcp 10.0.103.226:52742->10.0.56.13:6443: read: connection reset by peer
2026-04-20T04:33:42Z node/ip-10-0-101-42.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-42.ec2.internal?timeout=10s - read tcp 10.0.101.42:45372->10.0.56.13:6443: read: connection reset by peer
2026-04-20T04:33:58Z node/ip-10-0-29-10.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-10.ec2.internal?timeout=10s - read tcp 10.0.29.10:40530->10.0.56.13:6443: read: connection reset by peer
2026-04-20T04:37:31Z node/ip-10-0-31-31.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-31.ec2.internal?timeout=10s - read tcp 10.0.31.31:58364->10.0.56.13:6443: read: connection reset by peer
2026-04-20T04:37:36Z node/ip-10-0-36-119.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-119.ec2.internal?timeout=10s - read tcp 10.0.36.119:51884->10.0.56.13:6443: read: connection reset by peer
2026-04-20T04:37:37Z node/ip-10-0-21-180.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-180.ec2.internal?timeout=10s - read tcp 10.0.21.180:46284->10.0.103.133:6443: read: connection reset by peer
2026-04-20T04:37:42Z node/ip-10-0-29-10.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-10.ec2.internal?timeout=10s - read tcp 10.0.29.10:35692->10.0.103.133:6443: read: connection reset by peer
2026-04-20T05:26:57Z node/ip-10-0-21-180.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-55w2fdtz-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-180.ec2.internal?timeout=10s - context deadline exceeded
#2045956872602652672junit21 hours ago
2026-04-19T20:57:27Z node/ip-10-0-89-255.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5t8wffns-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-255.ec2.internal?timeout=10s - read tcp 10.0.89.255:60374->10.0.102.66:6443: read: connection reset by peer
2026-04-19T20:57:51Z node/ip-10-0-40-201.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5t8wffns-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-201.ec2.internal?timeout=10s - read tcp 10.0.40.201:51190->10.0.102.66:6443: read: connection reset by peer
2026-04-19T22:52:32Z node/ip-10-0-61-100.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5t8wffns-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-100.ec2.internal?timeout=10s - read tcp 10.0.61.100:55444->10.0.62.57:6443: read: connection reset by peer
2026-04-19T22:56:30Z node/ip-10-0-108-34.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5t8wffns-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-108-34.ec2.internal?timeout=10s - read tcp 10.0.108.34:34490->10.0.102.66:6443: read: connection reset by peer
2026-04-19T23:00:52Z node/ip-10-0-97-181.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5t8wffns-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-97-181.ec2.internal?timeout=10s - read tcp 10.0.97.181:52862->10.0.102.66:6443: read: connection reset by peer
2026-04-19T23:04:47Z node/ip-10-0-89-255.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5t8wffns-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-255.ec2.internal?timeout=10s - read tcp 10.0.89.255:51098->10.0.102.66:6443: read: connection reset by peer
2026-04-19T23:09:07Z node/ip-10-0-108-34.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5t8wffns-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-108-34.ec2.internal?timeout=10s - read tcp 10.0.108.34:47218->10.0.102.66:6443: read: connection reset by peer
2026-04-20T00:01:11Z node/ip-10-0-108-34.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5t8wffns-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-108-34.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045822619051102208junit30 hours ago
2026-04-19T13:47:04Z node/ip-10-0-63-239.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-63-239.ec2.internal?timeout=10s - read tcp 10.0.63.239:53508->10.0.1.14:6443: read: connection reset by peer
2026-04-19T13:47:14Z node/ip-10-0-0-94.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-94.ec2.internal?timeout=10s - read tcp 10.0.0.94:49964->10.0.64.140:6443: read: connection reset by peer
2026-04-19T13:51:09Z node/ip-10-0-127-137.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-137.ec2.internal?timeout=10s - read tcp 10.0.127.137:47590->10.0.64.140:6443: read: connection reset by peer
2026-04-19T13:51:14Z node/ip-10-0-10-11.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-10-11.ec2.internal?timeout=10s - read tcp 10.0.10.11:48240->10.0.64.140:6443: read: connection reset by peer
2026-04-19T13:55:45Z node/ip-10-0-127-137.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-137.ec2.internal?timeout=10s - read tcp 10.0.127.137:42890->10.0.64.140:6443: read: connection reset by peer
2026-04-19T13:55:47Z node/ip-10-0-88-180.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-88-180.ec2.internal?timeout=10s - read tcp 10.0.88.180:53910->10.0.64.140:6443: read: connection reset by peer
2026-04-19T13:55:53Z node/ip-10-0-0-94.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-94.ec2.internal?timeout=10s - read tcp 10.0.0.94:44934->10.0.1.14:6443: read: connection reset by peer
2026-04-19T13:55:57Z node/ip-10-0-88-180.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-88-180.ec2.internal?timeout=10s - read tcp 10.0.88.180:39226->10.0.1.14:6443: read: connection reset by peer
2026-04-19T13:55:58Z node/ip-10-0-63-98.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-63-98.ec2.internal?timeout=10s - read tcp 10.0.63.98:44164->10.0.64.140:6443: read: connection reset by peer
2026-04-19T13:56:16Z node/ip-10-0-63-239.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-63-239.ec2.internal?timeout=10s - read tcp 10.0.63.239:38922->10.0.1.14:6443: read: connection reset by peer
2026-04-19T13:59:50Z node/ip-10-0-63-239.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-63-239.ec2.internal?timeout=10s - read tcp 10.0.63.239:37022->10.0.64.140:6443: read: connection reset by peer
2026-04-19T14:03:35Z node/ip-10-0-63-239.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-63-239.ec2.internal?timeout=10s - read tcp 10.0.63.239:43974->10.0.1.14:6443: read: connection reset by peer
2026-04-19T14:03:42Z node/ip-10-0-0-94.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-94.ec2.internal?timeout=10s - read tcp 10.0.0.94:48484->10.0.1.14:6443: read: connection reset by peer
2026-04-19T14:50:10Z node/ip-10-0-23-152.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-48gy81vt-c64b8.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-152.ec2.internal?timeout=10s - context deadline exceeded
#2045686171207471104junit39 hours ago
Apr 19 04:55:25.491 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io: Patch "https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/overlappingrangeipreservations.whereabouts.cni.cncf.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.92.174:55968->10.0.126.210:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 04:55:25.491 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io: Patch "https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/overlappingrangeipreservations.whereabouts.cni.cncf.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.92.174:55968->10.0.126.210:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 04:55:53.499 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045686171207471104junit39 hours ago
2026-04-19T02:57:47Z node/ip-10-0-94-153.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-94-153.ec2.internal?timeout=10s - read tcp 10.0.94.153:38400->10.0.29.79:6443: read: connection reset by peer
2026-04-19T04:46:26Z node/ip-10-0-30-84.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-84.ec2.internal?timeout=10s - read tcp 10.0.30.84:37960->10.0.126.210:6443: read: connection reset by peer
2026-04-19T04:51:10Z node/ip-10-0-25-36.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-36.ec2.internal?timeout=10s - read tcp 10.0.25.36:50226->10.0.29.79:6443: read: connection reset by peer
2026-04-19T04:51:11Z node/ip-10-0-30-84.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-84.ec2.internal?timeout=10s - read tcp 10.0.30.84:38146->10.0.126.210:6443: read: connection reset by peer
2026-04-19T04:51:20Z node/ip-10-0-24-250.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-250.ec2.internal?timeout=10s - read tcp 10.0.24.250:45446->10.0.126.210:6443: read: connection reset by peer
2026-04-19T04:54:58Z node/ip-10-0-37-106.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-106.ec2.internal?timeout=10s - read tcp 10.0.37.106:57730->10.0.29.79:6443: read: connection reset by peer
2026-04-19T04:55:02Z node/ip-10-0-7-249.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-249.ec2.internal?timeout=10s - read tcp 10.0.7.249:43916->10.0.29.79:6443: read: connection reset by peer
2026-04-19T04:55:04Z node/ip-10-0-24-250.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-250.ec2.internal?timeout=10s - read tcp 10.0.24.250:57966->10.0.29.79:6443: read: connection reset by peer
2026-04-19T04:55:06Z node/ip-10-0-30-136.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-136.ec2.internal?timeout=10s - read tcp 10.0.30.136:33230->10.0.126.210:6443: read: connection reset by peer
2026-04-19T04:55:06Z node/ip-10-0-94-153.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-94-153.ec2.internal?timeout=10s - read tcp 10.0.94.153:55426->10.0.126.210:6443: read: connection reset by peer
2026-04-19T04:55:07Z node/ip-10-0-92-174.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-92-174.ec2.internal?timeout=10s - read tcp 10.0.92.174:55392->10.0.126.210:6443: read: connection reset by peer
2026-04-19T04:55:08Z node/ip-10-0-30-84.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-84.ec2.internal?timeout=10s - read tcp 10.0.30.84:43028->10.0.29.79:6443: read: connection reset by peer
2026-04-19T04:59:02Z node/ip-10-0-30-84.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-84.ec2.internal?timeout=10s - read tcp 10.0.30.84:40368->10.0.29.79:6443: read: connection reset by peer
2026-04-19T05:18:24Z node/ip-10-0-24-250.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-wckitcj1-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-250.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045605683293851648junit44 hours ago
2026-04-18T23:24:06Z node/ip-10-0-13-52.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-52.ec2.internal?timeout=10s - read tcp 10.0.13.52:54364->10.0.94.132:6443: read: connection reset by peer
2026-04-18T23:31:42Z node/ip-10-0-104-184.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-184.ec2.internal?timeout=10s - read tcp 10.0.104.184:52184->10.0.0.11:6443: read: connection reset by peer
2026-04-18T23:31:48Z node/ip-10-0-36-237.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-237.ec2.internal?timeout=10s - read tcp 10.0.36.237:47866->10.0.0.11:6443: read: connection reset by peer
2026-04-18T23:36:21Z node/ip-10-0-30-101.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-101.ec2.internal?timeout=10s - read tcp 10.0.30.101:49968->10.0.94.132:6443: read: connection reset by peer
2026-04-18T23:36:26Z node/ip-10-0-109-14.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-14.ec2.internal?timeout=10s - read tcp 10.0.109.14:37448->10.0.94.132:6443: read: connection reset by peer
2026-04-18T23:36:31Z node/ip-10-0-30-101.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-101.ec2.internal?timeout=10s - read tcp 10.0.30.101:50260->10.0.94.132:6443: read: connection reset by peer
2026-04-18T23:40:11Z node/ip-10-0-16-42.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-42.ec2.internal?timeout=10s - read tcp 10.0.16.42:60694->10.0.0.11:6443: read: connection reset by peer
2026-04-18T23:40:20Z node/ip-10-0-36-237.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-237.ec2.internal?timeout=10s - read tcp 10.0.36.237:57126->10.0.0.11:6443: read: connection reset by peer
2026-04-18T23:40:25Z node/ip-10-0-33-35.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-33-35.ec2.internal?timeout=10s - read tcp 10.0.33.35:44078->10.0.94.132:6443: read: connection reset by peer
2026-04-18T23:40:25Z node/ip-10-0-0-150.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-150.ec2.internal?timeout=10s - read tcp 10.0.0.150:33638->10.0.0.11:6443: read: connection reset by peer
2026-04-18T23:40:25Z node/ip-10-0-30-101.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-101.ec2.internal?timeout=10s - read tcp 10.0.30.101:36382->10.0.0.11:6443: read: connection reset by peer
2026-04-18T23:40:33Z node/ip-10-0-104-184.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-184.ec2.internal?timeout=10s - read tcp 10.0.104.184:33232->10.0.94.132:6443: read: connection reset by peer
2026-04-18T23:44:10Z node/ip-10-0-30-101.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-101.ec2.internal?timeout=10s - read tcp 10.0.30.101:36568->10.0.0.11:6443: read: connection reset by peer
2026-04-19T00:32:06Z node/ip-10-0-0-150.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-yy7g9m32-c64b8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-150.ec2.internal?timeout=10s - context deadline exceeded
pull-ci-openshift-cluster-network-operator-release-4.18-e2e-aws-ovn-upgrade-ipsec (all) - 1 runs, 0% failed, 100% of runs match
#2046216653103960064junit6 hours ago
2026-04-20T14:22:12Z node/ip-10-0-7-181.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yjnkb09q-bd161.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-181.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T14:35:44Z node/ip-10-0-63-11.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yjnkb09q-bd161.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-63-11.us-west-1.compute.internal?timeout=10s - read tcp 10.0.63.11:60596->10.0.17.240:6443: read: connection reset by peer
2026-04-20T14:35:46Z node/ip-10-0-13-92.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-yjnkb09q-bd161.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-92.us-west-1.compute.internal?timeout=10s - read tcp 10.0.13.92:37596->10.0.17.240:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.23-upgrade-from-stable-4.22-e2e-vsphere-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2046212156814266368junit6 hours ago
2026-04-20T14:57:16Z node/ci-op-tcz32m9c-ad1fa-ql97c-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-tcz32m9c-ad1fa.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-tcz32m9c-ad1fa-ql97c-master-0?timeout=10s - read tcp 10.93.152.101:49340->10.93.152.8:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.17-upgrade-from-stable-4.16-e2e-vsphere-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2046212660281741312junit6 hours ago
2026-04-20T14:53:00Z node/ci-op-gcvfk481-698b9-j7ls5-worker-0-62v94 - reason/FailedToUpdateLease https://api-int.ci-op-gcvfk481-698b9.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-gcvfk481-698b9-j7ls5-worker-0-62v94?timeout=10s - read tcp 10.93.251.97:32774->10.93.251.12:6443: read: connection reset by peer
2026-04-20T14:53:05Z node/ci-op-gcvfk481-698b9-j7ls5-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-gcvfk481-698b9.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-gcvfk481-698b9-j7ls5-master-2?timeout=10s - write tcp 10.93.251.86:60712->10.93.251.12:6443: write: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.16-upgrade-from-stable-4.15-ocp-e2e-aws-ovn-heterogeneous-upgrade (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2046204859069239296junit6 hours ago
2026-04-20T13:59:43Z node/ip-10-0-70-59.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jz6y9vim-7ab68.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-59.ec2.internal?timeout=10s - read tcp 10.0.70.59:43902->10.0.67.171:6443: read: connection reset by peer
2026-04-20T14:48:57Z node/ip-10-0-70-59.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jz6y9vim-7ab68.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-59.ec2.internal?timeout=10s - read tcp 10.0.70.59:52312->10.0.35.36:6443: read: connection reset by peer
pull-ci-openshift-ovn-kubernetes-release-4.16-4.16-upgrade-from-stable-4.15-e2e-aws-ovn-upgrade (all) - 5 runs, 80% failed, 50% of failures match = 40% impact
#2046190238497247232junit6 hours ago
2026-04-20T12:47:34Z node/ip-10-0-9-56.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-p5ftymxr-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-56.us-east-2.compute.internal?timeout=10s - read tcp 10.0.9.56:40174->10.0.10.7:6443: read: connection reset by peer
2026-04-20T12:47:37Z node/ip-10-0-12-159.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-p5ftymxr-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-12-159.us-east-2.compute.internal?timeout=10s - read tcp 10.0.12.159:38234->10.0.10.7:6443: read: connection reset by peer
2026-04-20T12:47:37Z node/ip-10-0-12-159.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-p5ftymxr-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-12-159.us-east-2.compute.internal?timeout=10s - read tcp 10.0.12.159:38062->10.0.10.7:6443: read: connection reset by peer
2026-04-20T13:08:43Z node/ip-10-0-38-40.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-p5ftymxr-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-38-40.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046190238497247232junit6 hours ago
2026-04-20T13:08:46Z node/ip-10-0-32-93.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-p5ftymxr-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-93.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T13:08:48Z node/ip-10-0-32-93.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-p5ftymxr-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-93.us-east-2.compute.internal?timeout=10s - read tcp 10.0.32.93:59192->10.0.10.7:6443: read: connection reset by peer
2026-04-20T13:08:51Z node/ip-10-0-9-56.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-p5ftymxr-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-56.us-east-2.compute.internal?timeout=10s - read tcp 10.0.9.56:45818->10.0.10.7:6443: read: connection reset by peer
2026-04-20T13:08:51Z node/ip-10-0-10-89.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-p5ftymxr-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-10-89.us-east-2.compute.internal?timeout=10s - read tcp 10.0.10.89:52792->10.0.10.7:6443: read: connection reset by peer
2026-04-20T13:08:52Z node/ip-10-0-36-159.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-p5ftymxr-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-159.us-east-2.compute.internal?timeout=10s - read tcp 10.0.36.159:58050->10.0.10.7:6443: read: connection reset by peer
2026-04-20T13:08:54Z node/ip-10-0-12-159.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-p5ftymxr-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-12-159.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046104902215667712junit12 hours ago
2026-04-20T07:12:06Z node/ip-10-0-22-87.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6xiblpmx-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-87.us-east-2.compute.internal?timeout=10s - read tcp 10.0.22.87:46532->10.0.20.189:6443: read: connection reset by peer
2026-04-20T07:12:23Z node/ip-10-0-51-90.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6xiblpmx-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-90.us-east-2.compute.internal?timeout=10s - read tcp 10.0.51.90:34488->10.0.20.189:6443: read: connection reset by peer
2026-04-20T07:12:26Z node/ip-10-0-22-87.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6xiblpmx-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-87.us-east-2.compute.internal?timeout=10s - read tcp 10.0.22.87:46420->10.0.20.189:6443: read: connection reset by peer
2026-04-20T07:16:07Z node/ip-10-0-7-87.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6xiblpmx-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-87.us-east-2.compute.internal?timeout=10s - read tcp 10.0.7.87:57976->10.0.20.189:6443: read: connection reset by peer
2026-04-20T07:32:15Z node/ip-10-0-51-90.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6xiblpmx-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-90.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T07:32:19Z node/ip-10-0-22-87.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6xiblpmx-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-87.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T07:52:15Z node/ip-10-0-1-197.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6xiblpmx-4997d.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-1-197.us-east-2.compute.internal?timeout=10s - read tcp 10.0.1.197:57414->10.0.20.189:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.20-upgrade-from-stable-4.19-ocp-e2e-upgrade-aws-ovn-multi-a-a (all) - 6 runs, 17% failed, 500% of failures match = 83% impact
#2046174073666408448junit6 hours ago
2026-04-20T12:25:40Z node/ip-10-0-29-190.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kvt6rpsn-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-190.ec2.internal?timeout=10s - read tcp 10.0.29.190:45316->10.0.81.40:6443: read: connection reset by peer
2026-04-20T12:29:20Z node/ip-10-0-17-117.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kvt6rpsn-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-117.ec2.internal?timeout=10s - read tcp 10.0.17.117:45936->10.0.23.34:6443: read: connection reset by peer
2026-04-20T12:29:25Z node/ip-10-0-6-41.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kvt6rpsn-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-41.ec2.internal?timeout=10s - read tcp 10.0.6.41:35362->10.0.81.40:6443: read: connection reset by peer
2026-04-20T13:11:59Z node/ip-10-0-6-41.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kvt6rpsn-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-41.ec2.internal?timeout=10s - read tcp 10.0.6.41:40974->10.0.81.40:6443: read: connection reset by peer
2026-04-20T13:19:06Z node/ip-10-0-17-117.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kvt6rpsn-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-117.ec2.internal?timeout=10s - read tcp 10.0.17.117:37578->10.0.81.40:6443: read: connection reset by peer
#2046174073666408448junit6 hours ago
Apr 20 12:25:46.926 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy: Patch "https://api-int.ci-op-kvt6rpsn-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/openshift-ovn-kubernetes-kube-rbac-proxy?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.29.190:57586->10.0.23.34:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 12:25:46.926 - 27s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy: Patch "https://api-int.ci-op-kvt6rpsn-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/openshift-ovn-kubernetes-kube-rbac-proxy?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.29.190:57586->10.0.23.34:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 12:26:14.143 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045956842378498048junit22 hours ago
2026-04-19T21:06:43Z node/ip-10-0-11-184.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gt2vyf51-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-184.ec2.internal?timeout=10s - read tcp 10.0.11.184:34434->10.0.49.64:6443: read: connection reset by peer
2026-04-19T21:10:38Z node/ip-10-0-117-220.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gt2vyf51-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-220.ec2.internal?timeout=10s - read tcp 10.0.117.220:38968->10.0.75.109:6443: read: connection reset by peer
2026-04-19T21:14:30Z node/ip-10-0-100-165.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gt2vyf51-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-165.ec2.internal?timeout=10s - read tcp 10.0.100.165:41038->10.0.75.109:6443: read: connection reset by peer
2026-04-19T21:14:33Z node/ip-10-0-117-220.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gt2vyf51-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-220.ec2.internal?timeout=10s - read tcp 10.0.117.220:41026->10.0.75.109:6443: read: connection reset by peer
2026-04-19T21:14:35Z node/ip-10-0-14-249.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gt2vyf51-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-249.ec2.internal?timeout=10s - read tcp 10.0.14.249:48778->10.0.49.64:6443: read: connection reset by peer
#2045822637212438528junit30 hours ago
2026-04-19T12:34:49Z node/ip-10-0-64-72.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-47bbmtl7-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-72.us-west-2.compute.internal?timeout=10s - read tcp 10.0.64.72:50294->10.0.18.46:6443: read: connection reset by peer
2026-04-19T13:28:38Z node/ip-10-0-18-6.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-47bbmtl7-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-18-6.us-west-2.compute.internal?timeout=10s - read tcp 10.0.18.6:56032->10.0.18.46:6443: read: connection reset by peer
2026-04-19T13:36:15Z node/ip-10-0-105-7.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-47bbmtl7-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-7.us-west-2.compute.internal?timeout=10s - read tcp 10.0.105.7:32896->10.0.113.1:6443: read: connection reset by peer
#2045822637212438528junit30 hours ago
Apr 19 13:21:28.504 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (console.openshift.io/v1, Kind=ConsolePlugin) /networking-console-plugin: failed to apply / update (console.openshift.io/v1, Kind=ConsolePlugin) /networking-console-plugin: Patch "https://api-int.ci-op-47bbmtl7-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/console.openshift.io/v1/consoleplugins/networking-console-plugin?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.109.52:51426->10.0.18.46:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 13:21:28.504 - 27s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (console.openshift.io/v1, Kind=ConsolePlugin) /networking-console-plugin: failed to apply / update (console.openshift.io/v1, Kind=ConsolePlugin) /networking-console-plugin: Patch "https://api-int.ci-op-47bbmtl7-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/console.openshift.io/v1/consoleplugins/networking-console-plugin?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.109.52:51426->10.0.18.46:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 13:21:56.448 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045686171018727424junit39 hours ago
2026-04-19T03:23:45Z node/ip-10-0-101-169.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-n9ffnq83-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-169.ec2.internal?timeout=10s - read tcp 10.0.101.169:50060->10.0.30.96:6443: read: connection reset by peer
2026-04-19T04:07:31Z node/ip-10-0-101-169.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-n9ffnq83-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-169.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T04:19:33Z node/ip-10-0-109-252.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-n9ffnq83-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-252.ec2.internal?timeout=10s - read tcp 10.0.109.252:43268->10.0.30.96:6443: read: connection reset by peer
#2045605706446409728junit45 hours ago
2026-04-18T22:11:52Z node/ip-10-0-84-201.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rccqsf0n-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-201.us-east-2.compute.internal?timeout=10s - read tcp 10.0.84.201:58026->10.0.1.15:6443: read: connection reset by peer
2026-04-18T22:16:04Z node/ip-10-0-80-92.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rccqsf0n-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-92.us-east-2.compute.internal?timeout=10s - read tcp 10.0.80.92:55966->10.0.85.216:6443: read: connection reset by peer
2026-04-18T22:20:08Z node/ip-10-0-100-134.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rccqsf0n-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-134.us-east-2.compute.internal?timeout=10s - read tcp 10.0.100.134:45718->10.0.1.15:6443: read: connection reset by peer
2026-04-18T23:01:14Z node/ip-10-0-55-189.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rccqsf0n-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-55-189.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-18T23:10:56Z node/ip-10-0-100-134.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rccqsf0n-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-134.us-east-2.compute.internal?timeout=10s - read tcp 10.0.100.134:36568->10.0.1.15:6443: read: connection reset by peer
2026-04-18T23:10:57Z node/ip-10-0-80-92.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rccqsf0n-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-92.us-east-2.compute.internal?timeout=10s - read tcp 10.0.80.92:52656->10.0.85.216:6443: read: connection reset by peer
2026-04-18T23:11:01Z node/ip-10-0-84-201.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rccqsf0n-d7ea5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-201.us-east-2.compute.internal?timeout=10s - read tcp 10.0.84.201:57334->10.0.85.216:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.19-ocp-e2e-aws-ovn-multi-x-x (all) - 9 runs, 11% failed, 500% of failures match = 56% impact
#2046199866463358976junit6 hours ago
2026-04-20T13:39:11Z node/ip-10-0-70-226.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rdb0kbm1-304e8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-226.us-west-2.compute.internal?timeout=10s - read tcp 10.0.70.226:37044->10.0.99.208:6443: read: connection reset by peer
#2046165719225208832junit9 hours ago
2026-04-20T10:44:35Z node/ip-10-0-58-17.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ks9rqwxr-304e8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-17.us-east-2.compute.internal?timeout=10s - read tcp 10.0.58.17:54908->10.0.45.136:6443: read: connection reset by peer
2026-04-20T10:44:40Z node/ip-10-0-53-80.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ks9rqwxr-304e8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-80.us-east-2.compute.internal?timeout=10s - read tcp 10.0.53.80:35860->10.0.45.136:6443: read: connection reset by peer
#2046055012563423232junit16 hours ago
2026-04-20T03:30:57Z node/ip-10-0-22-112.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3fpwqlhv-304e8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-112.ec2.internal?timeout=10s - read tcp 10.0.22.112:58804->10.0.27.45:6443: read: connection reset by peer
2026-04-20T03:34:52Z node/ip-10-0-22-112.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3fpwqlhv-304e8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-112.ec2.internal?timeout=10s - read tcp 10.0.22.112:51598->10.0.27.45:6443: read: connection reset by peer
2026-04-20T03:35:00Z node/ip-10-0-5-160.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3fpwqlhv-304e8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-160.ec2.internal?timeout=10s - read tcp 10.0.5.160:39356->10.0.97.226:6443: read: connection reset by peer
#2045991802225299456junit21 hours ago
2026-04-19T23:14:33Z node/ip-10-0-65-88.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-tw46xsik-304e8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-88.ec2.internal?timeout=10s - read tcp 10.0.65.88:34000->10.0.59.61:6443: read: connection reset by peer
#2045651522162790400junit43 hours ago
2026-04-19T00:39:08Z node/ip-10-0-94-234.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w37900t0-304e8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-94-234.us-west-2.compute.internal?timeout=10s - read tcp 10.0.94.234:49382->10.0.9.39:6443: read: connection reset by peer
2026-04-19T00:39:23Z node/ip-10-0-65-46.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w37900t0-304e8.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-46.us-west-2.compute.internal?timeout=10s - read tcp 10.0.65.46:44368->10.0.115.162:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-upgrade-from-stable-4.19-e2e-metal-ipi-ovn-upgrade (all) - 6 runs, 0% failed, 83% of runs match
#2046166013233336320junit7 hours ago
2026-04-20T12:54:47Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-20T12:54:50Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:60974->192.168.111.5:6443: read: connection reset by peer
2026-04-20T12:54:53Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:42122->192.168.111.5:6443: read: connection reset by peer
2026-04-20T12:54:56Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:41088->192.168.111.5:6443: read: connection reset by peer
2026-04-20T12:54:59Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:46316->192.168.111.5:6443: read: connection reset by peer
#2046033523214651392junit16 hours ago
2026-04-20T03:51:11Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - write tcp 192.168.111.21:54556->192.168.111.5:6443: write: broken pipe
2026-04-20T03:51:12Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:40798->192.168.111.5:6443: read: connection reset by peer
2026-04-20T03:51:13Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:43196->192.168.111.5:6443: read: connection reset by peer
2026-04-20T03:51:15Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:39640->192.168.111.5:6443: read: connection reset by peer
#2045949643161866240junit21 hours ago
2026-04-19T22:23:35Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-19T22:23:45Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:43354->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:23:45Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:44886->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:23:46Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - write tcp 192.168.111.20:38560->192.168.111.5:6443: write: connection reset by peer
2026-04-19T22:23:46Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:48188->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:23:51Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:32824->192.168.111.5:6443: read: connection reset by peer
#2045811492304982016junit31 hours ago
2026-04-19T13:06:08Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-19T13:06:16Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:45484->192.168.111.5:6443: read: connection reset by peer
2026-04-19T13:06:17Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - write tcp 192.168.111.21:37548->192.168.111.5:6443: write: connection reset by peer
2026-04-19T13:06:18Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:52878->192.168.111.5:6443: read: connection reset by peer
#2045679458555269120junit39 hours ago
2026-04-19T04:50:26Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-19T04:50:32Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:58936->192.168.111.5:6443: read: connection reset by peer
2026-04-19T04:50:32Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:50802->192.168.111.5:6443: read: connection reset by peer
2026-04-19T04:50:33Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:37094->192.168.111.5:6443: read: connection reset by peer
#2045679458555269120junit39 hours ago
namespace/openshift-machine-api node/master-1 pod/metal3-baremetal-operator-777567c99d-9hfqd hmsg/264b463d2d - never deleted - network rollout - firstTimestamp/2026-04-19T04:50:41Z interesting/true lastTimestamp/2026-04-19T04:50:41Z reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_metal3-baremetal-operator-777567c99d-9hfqd_openshift-machine-api_59e987d4-25ca-48da-8d1a-30bae87c84f2_0(be4d744b683af055d9b77cc9aa0cd043be85472315ee22a2a3e0501eb2c27bec): error adding pod openshift-machine-api_metal3-baremetal-operator-777567c99d-9hfqd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"be4d744b683af055d9b77cc9aa0cd043be85472315ee22a2a3e0501eb2c27bec" Netns:"/var/run/netns/2c76ba7f-0754-4047-8ab3-c20f3cad4774" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=metal3-baremetal-operator-777567c99d-9hfqd;K8S_POD_INFRA_CONTAINER_ID=be4d744b683af055d9b77cc9aa0cd043be85472315ee22a2a3e0501eb2c27bec;K8S_POD_UID=59e987d4-25ca-48da-8d1a-30bae87c84f2" Path:"" ERRORED: error configuring pod [openshift-machine-api/metal3-baremetal-operator-777567c99d-9hfqd] networking: Multus: [openshift-machine-api/metal3-baremetal-operator-777567c99d-9hfqd/59e987d4-25ca-48da-8d1a-30bae87c84f2]: error waiting for pod: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: read tcp 192.168.111.21:43422->192.168.111.5:6443: read: connection reset by peer
': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ovn-two-node-fencing-recovery (all) - 4 runs, 0% failed, 100% of runs match
#2046174279208275968junit7 hours ago
2026-04-20T11:57:57Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T12:02:50Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:52896->192.168.111.5:6443: read: connection reset by peer
2026-04-20T12:03:00Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046174279208275968junit7 hours ago
2026-04-20T12:27:27Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - context deadline exceeded
2026-04-20T12:39:52Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:39132->192.168.111.5:6443: read: connection reset by peer
2026-04-20T12:40:09Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045960392454180864junit21 hours ago
2026-04-19T22:05:20Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2026-04-19T22:15:16Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:48124->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:25:33Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - context deadline exceeded
#2045960392454180864junit21 hours ago
2026-04-19T22:43:30Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T22:58:58Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:55800->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:59:08Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - context deadline exceeded
#2045809420255891456junit31 hours ago
2026-04-19T13:16:57Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - context deadline exceeded
2026-04-19T13:46:41Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:41616->192.168.111.5:6443: read: connection reset by peer
#2045598000704655360junit45 hours ago
2026-04-18T21:51:48Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-18T21:57:01Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:39324->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:57:11Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-multiarch-main-nightly-4.20-ocp-e2e-upgrade-aws-ovn-multi-a-a (all) - 6 runs, 17% failed, 600% of failures match = 100% impact
#2046174107254394880junit7 hours ago
2026-04-20T11:43:40Z node/ip-10-0-93-193.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5kzzlrjv-eeeae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-93-193.ec2.internal?timeout=10s - read tcp 10.0.93.193:39948->10.0.2.47:6443: read: connection reset by peer
2026-04-20T11:47:51Z node/ip-10-0-37-0.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5kzzlrjv-eeeae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-0.ec2.internal?timeout=10s - read tcp 10.0.37.0:38466->10.0.113.171:6443: read: connection reset by peer
2026-04-20T11:48:01Z node/ip-10-0-37-0.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5kzzlrjv-eeeae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-0.ec2.internal?timeout=10s - read tcp 10.0.37.0:38822->10.0.113.171:6443: read: connection reset by peer
2026-04-20T11:48:23Z node/ip-10-0-31-148.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5kzzlrjv-eeeae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-148.ec2.internal?timeout=10s - read tcp 10.0.31.148:42456->10.0.113.171:6443: read: connection reset by peer
2026-04-20T12:37:09Z node/ip-10-0-85-143.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5kzzlrjv-eeeae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-85-143.ec2.internal?timeout=10s - read tcp 10.0.85.143:55560->10.0.2.47:6443: read: connection reset by peer
2026-04-20T12:37:15Z node/ip-10-0-93-193.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5kzzlrjv-eeeae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-93-193.ec2.internal?timeout=10s - read tcp 10.0.93.193:49224->10.0.113.171:6443: read: connection reset by peer
#2046041655504539648junit16 hours ago
2026-04-20T02:54:51Z node/ip-10-0-109-186.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nk2jxvik-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-186.us-west-2.compute.internal?timeout=10s - read tcp 10.0.109.186:45334->10.0.7.30:6443: read: connection reset by peer
2026-04-20T02:54:58Z node/ip-10-0-24-33.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nk2jxvik-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-33.us-west-2.compute.internal?timeout=10s - read tcp 10.0.24.33:38642->10.0.7.30:6443: read: connection reset by peer
2026-04-20T03:14:07Z node/ip-10-0-71-58.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nk2jxvik-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-58.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T03:17:46Z node/ip-10-0-24-33.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nk2jxvik-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-33.us-west-2.compute.internal?timeout=10s - read tcp 10.0.24.33:55282->10.0.7.30:6443: read: connection reset by peer
#2046041655504539648junit16 hours ago
2026-04-20T02:29:58Z node/ip-10-0-24-33.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nk2jxvik-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-33.us-west-2.compute.internal?timeout=10s - read tcp 10.0.24.33:43416->10.0.85.163:6443: read: connection reset by peer
2026-04-20T02:30:01Z node/ip-10-0-61-109.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nk2jxvik-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-109.us-west-2.compute.internal?timeout=10s - read tcp 10.0.61.109:32952->10.0.7.30:6443: read: connection reset by peer
2026-04-20T02:30:01Z node/ip-10-0-85-6.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nk2jxvik-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-85-6.us-west-2.compute.internal?timeout=10s - read tcp 10.0.85.6:56788->10.0.85.163:6443: read: connection reset by peer
2026-04-20T02:54:51Z node/ip-10-0-109-186.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nk2jxvik-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-186.us-west-2.compute.internal?timeout=10s - read tcp 10.0.109.186:45334->10.0.7.30:6443: read: connection reset by peer
2026-04-20T02:54:58Z node/ip-10-0-24-33.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nk2jxvik-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-33.us-west-2.compute.internal?timeout=10s - read tcp 10.0.24.33:38642->10.0.7.30:6443: read: connection reset by peer
2026-04-20T03:14:07Z node/ip-10-0-71-58.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nk2jxvik-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-58.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045956855582167040junit22 hours ago
2026-04-19T21:32:20Z node/ip-10-0-118-218.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w14gqg2i-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-118-218.us-east-2.compute.internal?timeout=10s - read tcp 10.0.118.218:42398->10.0.113.86:6443: read: connection reset by peer
2026-04-19T21:47:40Z node/ip-10-0-11-196.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w14gqg2i-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-196.us-east-2.compute.internal?timeout=10s - read tcp 10.0.11.196:54112->10.0.113.86:6443: read: connection reset by peer
2026-04-19T21:55:04Z node/ip-10-0-35-153.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w14gqg2i-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-153.us-east-2.compute.internal?timeout=10s - read tcp 10.0.35.153:47750->10.0.113.86:6443: read: connection reset by peer
#2045956855582167040junit22 hours ago
2026-04-19T21:06:08Z node/ip-10-0-11-196.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w14gqg2i-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-196.us-east-2.compute.internal?timeout=10s - read tcp 10.0.11.196:53416->10.0.113.86:6443: read: connection reset by peer
2026-04-19T21:06:18Z node/ip-10-0-91-137.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w14gqg2i-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-91-137.us-east-2.compute.internal?timeout=10s - read tcp 10.0.91.137:33410->10.0.113.86:6443: read: connection reset by peer
2026-04-19T21:32:20Z node/ip-10-0-118-218.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w14gqg2i-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-118-218.us-east-2.compute.internal?timeout=10s - read tcp 10.0.118.218:42398->10.0.113.86:6443: read: connection reset by peer
2026-04-19T21:47:40Z node/ip-10-0-11-196.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w14gqg2i-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-196.us-east-2.compute.internal?timeout=10s - read tcp 10.0.11.196:54112->10.0.113.86:6443: read: connection reset by peer
2026-04-19T21:55:04Z node/ip-10-0-35-153.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w14gqg2i-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-153.us-east-2.compute.internal?timeout=10s - read tcp 10.0.35.153:47750->10.0.113.86:6443: read: connection reset by peer
#2045822643931713536junit31 hours ago
2026-04-19T12:33:12Z node/ip-10-0-13-236.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l2sc44x2-eeeae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-236.us-east-2.compute.internal?timeout=10s - read tcp 10.0.13.236:56710->10.0.81.204:6443: read: connection reset by peer
2026-04-19T12:53:00Z node/ip-10-0-13-236.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l2sc44x2-eeeae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-236.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045686177628950528junit40 hours ago
2026-04-19T03:17:12Z node/ip-10-0-83-242.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fi49qss0-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-242.ec2.internal?timeout=10s - read tcp 10.0.83.242:51186->10.0.64.207:6443: read: connection reset by peer
2026-04-19T03:21:11Z node/ip-10-0-61-24.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fi49qss0-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-24.ec2.internal?timeout=10s - read tcp 10.0.61.24:54288->10.0.38.191:6443: read: connection reset by peer
2026-04-19T03:25:00Z node/ip-10-0-0-137.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fi49qss0-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-137.ec2.internal?timeout=10s - read tcp 10.0.0.137:42114->10.0.64.207:6443: read: connection reset by peer
2026-04-19T03:25:06Z node/ip-10-0-61-24.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fi49qss0-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-24.ec2.internal?timeout=10s - read tcp 10.0.61.24:49884->10.0.38.191:6443: read: connection reset by peer
2026-04-19T03:34:51Z node/ip-10-0-83-242.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fi49qss0-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-242.ec2.internal?timeout=10s - read tcp 10.0.83.242:42406->10.0.64.207:6443: read: connection reset by peer
2026-04-19T03:41:43Z node/ip-10-0-1-105.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fi49qss0-eeeae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-1-105.ec2.internal?timeout=10s - read tcp 10.0.1.105:35448->10.0.64.207:6443: read: connection reset by peer
#2045605719863988224junit45 hours ago
2026-04-18T22:41:30Z node/ip-10-0-59-88.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-il3iwn23-eeeae.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-88.us-west-1.compute.internal?timeout=10s - read tcp 10.0.59.88:39718->10.0.37.7:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-ovn-upgrade-fips (all) - 6 runs, 0% failed, 100% of runs match
#2046164775125127168junit7 hours ago
2026-04-20T10:39:04Z node/ip-10-0-38-182.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4d960nzt-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-38-182.us-west-1.compute.internal?timeout=10s - read tcp 10.0.38.182:41976->10.0.111.231:6443: read: connection reset by peer
2026-04-20T11:51:07Z node/ip-10-0-80-53.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4d960nzt-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-53.us-west-1.compute.internal?timeout=10s - context deadline exceeded
2026-04-20T11:54:01Z node/ip-10-0-51-246.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4d960nzt-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-246.us-west-1.compute.internal?timeout=10s - read tcp 10.0.51.246:59280->10.0.3.171:6443: read: connection reset by peer
#2046164775125127168junit7 hours ago
Apr 20 11:47:39.673 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io: failed to apply / update (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io: Patch "https://api-int.ci-op-4d960nzt-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/multus.openshift.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.80.53:59714->10.0.3.171:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 11:47:39.673 - 27s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io: failed to apply / update (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io: Patch "https://api-int.ci-op-4d960nzt-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/multus.openshift.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.80.53:59714->10.0.3.171:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 20 11:48:07.672 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046033543347310592junit16 hours ago
2026-04-20T02:29:29Z node/ip-10-0-21-211.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-gwyj3v0h-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-211.us-west-2.compute.internal?timeout=10s - read tcp 10.0.21.211:34668->10.0.91.56:6443: read: connection reset by peer
2026-04-20T02:39:10Z node/ip-10-0-93-5.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-gwyj3v0h-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-93-5.us-west-2.compute.internal?timeout=10s - context deadline exceeded
#2045949658269749248junit22 hours ago
2026-04-19T20:56:30Z node/ip-10-0-9-20.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x0s7yzqb-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-20.us-west-2.compute.internal?timeout=10s - read tcp 10.0.9.20:49494->10.0.31.144:6443: read: connection reset by peer
2026-04-19T21:00:40Z node/ip-10-0-121-96.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x0s7yzqb-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-96.us-west-2.compute.internal?timeout=10s - read tcp 10.0.121.96:60334->10.0.31.144:6443: read: connection reset by peer
2026-04-19T21:04:30Z node/ip-10-0-9-20.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x0s7yzqb-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-20.us-west-2.compute.internal?timeout=10s - read tcp 10.0.9.20:57082->10.0.31.144:6443: read: connection reset by peer
2026-04-19T21:04:41Z node/ip-10-0-125-227.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x0s7yzqb-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-227.us-west-2.compute.internal?timeout=10s - read tcp 10.0.125.227:43908->10.0.31.144:6443: read: connection reset by peer
2026-04-19T21:23:20Z node/ip-10-0-103-77.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x0s7yzqb-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-77.us-west-2.compute.internal?timeout=10s - read tcp 10.0.103.77:60432->10.0.88.162:6443: read: connection reset by peer
2026-04-19T21:33:55Z node/ip-10-0-9-20.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-x0s7yzqb-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-20.us-west-2.compute.internal?timeout=10s - read tcp 10.0.9.20:54084->10.0.88.162:6443: read: connection reset by peer
#2045811516627750912junit32 hours ago
2026-04-19T11:27:15Z node/ip-10-0-14-42.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-h861sp88-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-42.ec2.internal?timeout=10s - read tcp 10.0.14.42:36326->10.0.101.58:6443: read: connection reset by peer
2026-04-19T11:27:21Z node/ip-10-0-102-2.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-h861sp88-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-2.ec2.internal?timeout=10s - read tcp 10.0.102.2:44526->10.0.33.136:6443: read: connection reset by peer
2026-04-19T11:35:25Z node/ip-10-0-58-159.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-h861sp88-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-159.ec2.internal?timeout=10s - read tcp 10.0.58.159:41364->10.0.101.58:6443: read: connection reset by peer
2026-04-19T11:43:35Z node/ip-10-0-58-159.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-h861sp88-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-159.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T12:02:17Z node/ip-10-0-14-42.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-h861sp88-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-42.ec2.internal?timeout=10s - read tcp 10.0.14.42:45846->10.0.101.58:6443: read: connection reset by peer
2026-04-19T12:02:32Z node/ip-10-0-102-2.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-h861sp88-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-2.ec2.internal?timeout=10s - read tcp 10.0.102.2:42262->10.0.101.58:6443: read: connection reset by peer
#2045679508060639232 junit 40 hours ago
2026-04-19T02:52:43Z node/ip-10-0-123-112.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b73hyk7p-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-112.us-west-1.compute.internal?timeout=10s - read tcp 10.0.123.112:50672->10.0.81.140:6443: read: connection reset by peer
2026-04-19T02:52:49Z node/ip-10-0-7-136.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b73hyk7p-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-136.us-west-1.compute.internal?timeout=10s - read tcp 10.0.7.136:59114->10.0.81.140:6443: read: connection reset by peer
2026-04-19T02:56:57Z node/ip-10-0-93-251.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b73hyk7p-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-93-251.us-west-1.compute.internal?timeout=10s - read tcp 10.0.93.251:36290->10.0.5.145:6443: read: connection reset by peer
2026-04-19T02:56:59Z node/ip-10-0-123-112.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b73hyk7p-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-112.us-west-1.compute.internal?timeout=10s - read tcp 10.0.123.112:52618->10.0.5.145:6443: read: connection reset by peer
2026-04-19T03:07:22Z node/ip-10-0-123-112.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b73hyk7p-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-112.us-west-1.compute.internal?timeout=10s - read tcp 10.0.123.112:34408->10.0.81.140:6443: read: connection reset by peer
2026-04-19T03:22:10Z node/ip-10-0-123-112.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b73hyk7p-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-112.us-west-1.compute.internal?timeout=10s - read tcp 10.0.123.112:36322->10.0.5.145:6443: read: connection reset by peer
#2045600202143830016 junit 45 hours ago
2026-04-18T21:31:05Z node/ip-10-0-82-140.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ptdygnzp-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-82-140.us-west-2.compute.internal?timeout=10s - read tcp 10.0.82.140:43800->10.0.90.76:6443: read: connection reset by peer
2026-04-18T21:34:53Z node/ip-10-0-14-250.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ptdygnzp-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-250.us-west-2.compute.internal?timeout=10s - read tcp 10.0.14.250:38890->10.0.32.10:6443: read: connection reset by peer
2026-04-18T21:39:25Z node/ip-10-0-65-122.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ptdygnzp-95e4a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-122.us-west-2.compute.internal?timeout=10s - read tcp 10.0.65.122:43750->10.0.90.76:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.20-e2e-aws-ovn-upgrade-out-of-change (all) - 6 runs, 0% failed, 100% of runs match
#2046165991418761216 junit 7 hours ago
2026-04-20T10:44:39Z node/ip-10-0-23-242.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fxhif42v-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-242.ec2.internal?timeout=10s - read tcp 10.0.23.242:60958->10.0.46.230:6443: read: connection reset by peer
2026-04-20T10:45:08Z node/ip-10-0-118-206.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fxhif42v-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-118-206.ec2.internal?timeout=10s - read tcp 10.0.118.206:35110->10.0.75.140:6443: read: connection reset by peer
2026-04-20T11:04:20Z node/ip-10-0-78-19.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fxhif42v-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-78-19.ec2.internal?timeout=10s - read tcp 10.0.78.19:39094->10.0.75.140:6443: read: connection reset by peer
2026-04-20T11:12:00Z node/ip-10-0-90-41.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fxhif42v-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-41.ec2.internal?timeout=10s - read tcp 10.0.90.41:46518->10.0.75.140:6443: read: connection reset by peer
2026-04-20T11:12:22Z node/ip-10-0-52-37.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fxhif42v-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-37.ec2.internal?timeout=10s - read tcp 10.0.52.37:41430->10.0.75.140:6443: read: connection reset by peer
2026-04-20T11:53:54Z node/ip-10-0-23-242.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fxhif42v-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-242.ec2.internal?timeout=10s - read tcp 10.0.23.242:37260->10.0.46.230:6443: read: connection reset by peer
#2046033467833061376 junit 16 hours ago
2026-04-20T02:25:39Z node/ip-10-0-119-63.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l2j2idgj-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-63.us-west-2.compute.internal?timeout=10s - read tcp 10.0.119.63:53058->10.0.37.40:6443: read: connection reset by peer
2026-04-20T02:29:34Z node/ip-10-0-127-41.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l2j2idgj-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-41.us-west-2.compute.internal?timeout=10s - read tcp 10.0.127.41:56272->10.0.37.40:6443: read: connection reset by peer
2026-04-20T02:29:53Z node/ip-10-0-19-56.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l2j2idgj-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-56.us-west-2.compute.internal?timeout=10s - read tcp 10.0.19.56:40196->10.0.122.44:6443: read: connection reset by peer
2026-04-20T02:29:57Z node/ip-10-0-72-19.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l2j2idgj-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-19.us-west-2.compute.internal?timeout=10s - read tcp 10.0.72.19:60226->10.0.122.44:6443: read: connection reset by peer
2026-04-20T02:33:21Z node/ip-10-0-20-19.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l2j2idgj-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-19.us-west-2.compute.internal?timeout=10s - read tcp 10.0.20.19:49092->10.0.122.44:6443: read: connection reset by peer
2026-04-20T02:34:00Z node/ip-10-0-119-63.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l2j2idgj-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-63.us-west-2.compute.internal?timeout=10s - read tcp 10.0.119.63:49418->10.0.122.44:6443: read: connection reset by peer
2026-04-20T02:46:45Z node/ip-10-0-127-41.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l2j2idgj-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-41.us-west-2.compute.internal?timeout=10s - read tcp 10.0.127.41:46382->10.0.37.40:6443: read: connection reset by peer
2026-04-20T02:54:33Z node/ip-10-0-20-19.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l2j2idgj-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-19.us-west-2.compute.internal?timeout=10s - read tcp 10.0.20.19:52816->10.0.122.44:6443: read: connection reset by peer
2026-04-20T03:00:36Z node/ip-10-0-19-56.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-l2j2idgj-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-56.us-west-2.compute.internal?timeout=10s - read tcp 10.0.19.56:39088->10.0.122.44:6443: read: connection reset by peer
#2045949644877336576 junit 22 hours ago
2026-04-19T20:51:41Z node/ip-10-0-58-85.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t08clq7k-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-85.us-east-2.compute.internal?timeout=10s - read tcp 10.0.58.85:42834->10.0.87.215:6443: read: connection reset by peer
2026-04-19T20:59:59Z node/ip-10-0-78-194.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t08clq7k-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-78-194.us-east-2.compute.internal?timeout=10s - read tcp 10.0.78.194:34276->10.0.87.215:6443: read: connection reset by peer
2026-04-19T21:10:50Z node/ip-10-0-36-130.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t08clq7k-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-130.us-east-2.compute.internal?timeout=10s - context deadline exceeded
#2045949644877336576 junit 22 hours ago
2026-04-19T21:11:05Z node/ip-10-0-105-243.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t08clq7k-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-243.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T21:20:40Z node/ip-10-0-105-243.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t08clq7k-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-243.us-east-2.compute.internal?timeout=10s - read tcp 10.0.105.243:50636->10.0.54.242:6443: read: connection reset by peer
2026-04-19T21:27:36Z node/ip-10-0-78-194.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-t08clq7k-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-78-194.us-east-2.compute.internal?timeout=10s - read tcp 10.0.78.194:59578->10.0.87.215:6443: read: connection reset by peer
#2045949644877336576 junit 22 hours ago
Apr 19 20:51:49.781 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools: failed to apply / update (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools: Patch "https://api-int.ci-op-t08clq7k-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.105.243:50028->10.0.54.242:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 20:51:49.781 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools: failed to apply / update (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools: Patch "https://api-int.ci-op-t08clq7k-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.105.243:50028->10.0.54.242:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 20:52:17.790 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045811498164424704 junit 32 hours ago
2026-04-19T11:30:47Z node/ip-10-0-18-27.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-i1g51kbb-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-18-27.ec2.internal?timeout=10s - read tcp 10.0.18.27:59374->10.0.57.195:6443: read: connection reset by peer
2026-04-19T11:52:17Z node/ip-10-0-99-242.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-i1g51kbb-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-99-242.ec2.internal?timeout=10s - context deadline exceeded
2026-04-19T11:56:14Z node/ip-10-0-68-126.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-i1g51kbb-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-68-126.ec2.internal?timeout=10s - read tcp 10.0.68.126:50796->10.0.57.195:6443: read: connection reset by peer
2026-04-19T12:03:26Z node/ip-10-0-105-72.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-i1g51kbb-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-72.ec2.internal?timeout=10s - read tcp 10.0.105.72:48536->10.0.74.204:6443: read: connection reset by peer
#2045679462896373760 junit 40 hours ago
2026-04-19T02:42:23Z node/ip-10-0-83-3.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z22jpgtf-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-3.ec2.internal?timeout=10s - read tcp 10.0.83.3:50182->10.0.127.82:6443: read: connection reset by peer
2026-04-19T02:46:20Z node/ip-10-0-59-20.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z22jpgtf-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-20.ec2.internal?timeout=10s - read tcp 10.0.59.20:43422->10.0.45.102:6443: read: connection reset by peer
2026-04-19T02:46:23Z node/ip-10-0-22-1.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z22jpgtf-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-1.ec2.internal?timeout=10s - read tcp 10.0.22.1:47846->10.0.45.102:6443: read: connection reset by peer
2026-04-19T03:03:16Z node/ip-10-0-100-124.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z22jpgtf-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-124.ec2.internal?timeout=10s - read tcp 10.0.100.124:54582->10.0.127.82:6443: read: connection reset by peer
2026-04-19T03:10:42Z node/ip-10-0-100-124.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z22jpgtf-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-124.ec2.internal?timeout=10s - read tcp 10.0.100.124:60440->10.0.45.102:6443: read: connection reset by peer
2026-04-19T03:13:49Z node/ip-10-0-59-20.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z22jpgtf-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-20.ec2.internal?timeout=10s - context deadline exceeded
2026-04-19T03:17:41Z node/ip-10-0-24-145.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z22jpgtf-ff06f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-145.ec2.internal?timeout=10s - read tcp 10.0.24.145:37268->10.0.45.102:6443: read: connection reset by peer
#2045600252488060928 junit 45 hours ago
2026-04-18T21:36:38Z node/ip-10-0-39-119.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kv5ibm9s-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-119.ec2.internal?timeout=10s - read tcp 10.0.39.119:60950->10.0.41.28:6443: read: connection reset by peer
2026-04-18T21:36:39Z node/ip-10-0-47-243.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kv5ibm9s-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-243.ec2.internal?timeout=10s - read tcp 10.0.47.243:43594->10.0.41.28:6443: read: connection reset by peer
2026-04-18T21:37:02Z node/ip-10-0-29-46.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kv5ibm9s-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-46.ec2.internal?timeout=10s - read tcp 10.0.29.46:41040->10.0.113.223:6443: read: connection reset by peer
2026-04-18T21:40:35Z node/ip-10-0-39-113.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kv5ibm9s-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-113.ec2.internal?timeout=10s - read tcp 10.0.39.113:35364->10.0.113.223:6443: read: connection reset by peer
2026-04-18T21:44:26Z node/ip-10-0-102-30.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kv5ibm9s-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-30.ec2.internal?timeout=10s - read tcp 10.0.102.30:45288->10.0.113.223:6443: read: connection reset by peer
2026-04-18T21:44:30Z node/ip-10-0-39-113.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kv5ibm9s-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-113.ec2.internal?timeout=10s - read tcp 10.0.39.113:43986->10.0.41.28:6443: read: connection reset by peer
2026-04-18T21:44:33Z node/ip-10-0-29-46.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kv5ibm9s-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-46.ec2.internal?timeout=10s - read tcp 10.0.29.46:41636->10.0.41.28:6443: read: connection reset by peer
2026-04-18T22:12:08Z node/ip-10-0-102-30.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kv5ibm9s-ff06f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-30.ec2.internal?timeout=10s - read tcp 10.0.102.30:52376->10.0.41.28:6443: read: connection reset by peer
periodic-ci-openshift-cluster-authentication-operator-release-4.21-periodics-e2e-gcp-external-oidc-uid-extra (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2046187493363027968 junit 8 hours ago
2026-04-20T13:09:28Z node/ci-op-gxwtdq58-d42e1-4d889-worker-f-snn4p - reason/FailedToUpdateLease https://api-int.ci-op-gxwtdq58-d42e1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-gxwtdq58-d42e1-4d889-worker-f-snn4p?timeout=10s - read tcp 10.0.128.2:35748->10.0.0.2:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.20-ocp-e2e-aws-ovn-multi-x-x (all) - 7 runs, 0% failed, 57% of runs match
#2046174099675287552 junit 8 hours ago
2026-04-20T11:25:51Z node/ip-10-0-12-199.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nwm0lpbw-cb4fb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-12-199.us-west-2.compute.internal?timeout=10s - read tcp 10.0.12.199:52958->10.0.18.140:6443: read: connection reset by peer
2026-04-20T11:25:54Z node/ip-10-0-115-130.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-nwm0lpbw-cb4fb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-115-130.us-west-2.compute.internal?timeout=10s - read tcp 10.0.115.130:34024->10.0.18.140:6443: read: connection reset by peer
#2045956843863281664 junit 23 hours ago
2026-04-19T21:07:13Z node/ip-10-0-70-93.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-g22555dq-cb4fb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-93.ec2.internal?timeout=10s - read tcp 10.0.70.93:34612->10.0.60.230:6443: read: connection reset by peer
2026-04-19T21:07:18Z node/ip-10-0-41-80.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-g22555dq-cb4fb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-41-80.ec2.internal?timeout=10s - read tcp 10.0.41.80:34054->10.0.99.139:6443: read: connection reset by peer
2026-04-19T21:11:09Z node/ip-10-0-16-168.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-g22555dq-cb4fb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-168.ec2.internal?timeout=10s - read tcp 10.0.16.168:58810->10.0.99.139:6443: read: connection reset by peer
2026-04-19T21:11:19Z node/ip-10-0-14-197.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-g22555dq-cb4fb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-197.ec2.internal?timeout=10s - read tcp 10.0.14.197:54394->10.0.99.139:6443: read: connection reset by peer
#2045822646435713024 junit 32 hours ago
2026-04-19T12:03:03Z node/ip-10-0-125-97.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ytgqjr37-cb4fb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-97.us-west-1.compute.internal?timeout=10s - read tcp 10.0.125.97:38908->10.0.109.172:6443: read: connection reset by peer
#2045605721558487040 junit 46 hours ago
2026-04-18T21:55:50Z node/ip-10-0-11-146.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zxprkyvy-cb4fb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-146.us-west-2.compute.internal?timeout=10s - read tcp 10.0.11.146:39894->10.0.112.192:6443: read: connection reset by peer
2026-04-18T21:55:51Z node/ip-10-0-52-30.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zxprkyvy-cb4fb.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-30.us-west-2.compute.internal?timeout=10s - read tcp 10.0.52.30:60722->10.0.112.192:6443: read: connection reset by peer
periodic-ci-openshift-cluster-authentication-operator-release-4.20-periodics-e2e-aws-external-oidc-configure-techpreview (all) - 2 runs, 0% failed, 100% of runs match
#2046159314392977408 junit 8 hours ago
2026-04-20T10:50:51Z node/ip-10-0-42-49.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-42-49.us-west-2.compute.internal?timeout=10s - read tcp 10.0.42.49:33842->10.0.7.201:6443: read: connection reset by peer
2026-04-20T10:55:03Z node/ip-10-0-80-230.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-230.us-west-2.compute.internal?timeout=10s - read tcp 10.0.80.230:43622->10.0.7.201:6443: read: connection reset by peer
2026-04-20T10:55:14Z node/ip-10-0-34-16.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-34-16.us-west-2.compute.internal?timeout=10s - read tcp 10.0.34.16:43134->10.0.7.201:6443: read: connection reset by peer
2026-04-20T10:58:52Z node/ip-10-0-62-229.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-229.us-west-2.compute.internal?timeout=10s - read tcp 10.0.62.229:45954->10.0.69.85:6443: read: connection reset by peer
2026-04-20T11:04:23Z node/ip-10-0-101-146.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-146.us-west-2.compute.internal?timeout=10s - read tcp 10.0.101.146:33774->10.0.7.201:6443: read: connection reset by peer
2026-04-20T11:04:28Z node/ip-10-0-42-49.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-42-49.us-west-2.compute.internal?timeout=10s - read tcp 10.0.42.49:44376->10.0.7.201:6443: read: connection reset by peer
2026-04-20T11:12:18Z node/ip-10-0-42-49.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-42-49.us-west-2.compute.internal?timeout=10s - read tcp 10.0.42.49:51808->10.0.69.85:6443: read: connection reset by peer
2026-04-20T11:12:22Z node/ip-10-0-34-16.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-34-16.us-west-2.compute.internal?timeout=10s - read tcp 10.0.34.16:33288->10.0.69.85:6443: read: connection reset by peer
2026-04-20T11:18:02Z node/ip-10-0-80-230.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-230.us-west-2.compute.internal?timeout=10s - read tcp 10.0.80.230:39788->10.0.7.201:6443: read: connection reset by peer
2026-04-20T11:18:06Z node/ip-10-0-62-229.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-229.us-west-2.compute.internal?timeout=10s - read tcp 10.0.62.229:56674->10.0.69.85:6443: read: connection reset by peer
2026-04-20T11:18:06Z node/ip-10-0-48-5.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-5.us-west-2.compute.internal?timeout=10s - read tcp 10.0.48.5:60744->10.0.7.201:6443: read: connection reset by peer
2026-04-20T11:26:00Z node/ip-10-0-101-146.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-146.us-west-2.compute.internal?timeout=10s - read tcp 10.0.101.146:37294->10.0.7.201:6443: read: connection reset by peer
2026-04-20T11:26:02Z node/ip-10-0-80-230.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-230.us-west-2.compute.internal?timeout=10s - read tcp 10.0.80.230:58504->10.0.69.85:6443: read: connection reset by peer
2026-04-20T11:26:16Z node/ip-10-0-42-49.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pt905fys-99187.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-42-49.us-west-2.compute.internal?timeout=10s - read tcp 10.0.42.49:37690->10.0.69.85:6443: read: connection reset by peer

... 42 lines not shown

#2045795805964537856 junit 32 hours ago
2026-04-19T10:23:20Z node/ip-10-0-25-108.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-108.ec2.internal?timeout=10s - read tcp 10.0.25.108:37118->10.0.13.55:6443: read: connection reset by peer
2026-04-19T10:36:02Z node/ip-10-0-42-158.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-42-158.ec2.internal?timeout=10s - read tcp 10.0.42.158:44924->10.0.73.225:6443: read: connection reset by peer
2026-04-19T10:36:04Z node/ip-10-0-53-161.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-161.ec2.internal?timeout=10s - read tcp 10.0.53.161:44592->10.0.13.55:6443: read: connection reset by peer
2026-04-19T10:36:25Z node/ip-10-0-25-108.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-108.ec2.internal?timeout=10s - read tcp 10.0.25.108:53234->10.0.73.225:6443: read: connection reset by peer
2026-04-19T10:43:59Z node/ip-10-0-103-111.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-111.ec2.internal?timeout=10s - read tcp 10.0.103.111:33138->10.0.73.225:6443: read: connection reset by peer
2026-04-19T10:44:04Z node/ip-10-0-25-108.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-108.ec2.internal?timeout=10s - read tcp 10.0.25.108:53292->10.0.73.225:6443: read: connection reset by peer
2026-04-19T10:49:53Z node/ip-10-0-100-161.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-161.ec2.internal?timeout=10s - read tcp 10.0.100.161:50548->10.0.73.225:6443: read: connection reset by peer
2026-04-19T10:53:20Z node/ip-10-0-29-84.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-84.ec2.internal?timeout=10s - read tcp 10.0.29.84:53380->10.0.73.225:6443: read: connection reset by peer
2026-04-19T10:53:25Z node/ip-10-0-25-108.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-108.ec2.internal?timeout=10s - read tcp 10.0.25.108:48814->10.0.73.225:6443: read: connection reset by peer
2026-04-19T10:53:29Z node/ip-10-0-100-161.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-100-161.ec2.internal?timeout=10s - read tcp 10.0.100.161:52548->10.0.13.55:6443: read: connection reset by peer
2026-04-19T10:53:30Z node/ip-10-0-29-84.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-84.ec2.internal?timeout=10s - read tcp 10.0.29.84:53410->10.0.73.225:6443: read: connection reset by peer
2026-04-19T10:53:57Z node/ip-10-0-53-161.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-161.ec2.internal?timeout=10s - read tcp 10.0.53.161:32890->10.0.13.55:6443: read: connection reset by peer
2026-04-19T10:57:21Z node/ip-10-0-53-161.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-161.ec2.internal?timeout=10s - read tcp 10.0.53.161:51836->10.0.73.225:6443: read: connection reset by peer
2026-04-19T11:03:03Z node/ip-10-0-29-84.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cd7m5kb-99187.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-84.ec2.internal?timeout=10s - read tcp 10.0.29.84:46890->10.0.13.55:6443: read: connection reset by peer

... 38 lines not shown

periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ipi-ovn-upgrade (all) - 8 runs, 0% failed, 75% of runs match
#2046142566990090240junit8 hours ago
2026-04-20T11:15:46Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:54164->192.168.111.5:6443: read: connection reset by peer
2026-04-20T11:15:47Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:57114->192.168.111.5:6443: read: connection reset by peer
2026-04-20T11:15:51Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:48270->192.168.111.5:6443: read: connection reset by peer
2026-04-20T11:15:52Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:32850->192.168.111.5:6443: read: connection reset by peer
2026-04-20T11:15:53Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:57578->192.168.111.5:6443: read: connection reset by peer
#2046041525611139072junit15 hours ago
2026-04-20T03:50:06Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - context deadline exceeded
2026-04-20T03:59:25Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:52318->192.168.111.5:6443: read: connection reset by peer
2026-04-20T03:59:27Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:44954->192.168.111.5:6443: read: connection reset by peer
2026-04-20T03:59:28Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:33606->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:05:26Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:56168->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:05:26Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:39260->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:05:34Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:49224->192.168.111.5:6443: read: connection reset by peer
#2045943054950469632junit22 hours ago
2026-04-19T21:30:52Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:41396->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:30:54Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:45756->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:31:01Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:34678->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:31:01Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:46394->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:33:47Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T21:37:04Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:44564->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:37:05Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:48766->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:37:10Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:42436->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:37:11Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:35332->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:37:13Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:37616->192.168.111.5:6443: read: connection reset by peer
#2045857747466981376junit27 hours ago
2026-04-19T14:30:47Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T16:01:03Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:36300->192.168.111.5:6443: read: connection reset by peer
2026-04-19T16:01:10Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:47978->192.168.111.5:6443: read: connection reset by peer
2026-04-19T16:01:10Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:59380->192.168.111.5:6443: read: connection reset by peer
#2045767015045533696junit33 hours ago
2026-04-19T09:56:57Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:49834->192.168.111.5:6443: read: connection reset by peer
2026-04-19T09:57:01Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:35248->192.168.111.5:6443: read: connection reset by peer
#2045582008549117952junit46 hours ago
2026-04-18T20:05:57Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - context deadline exceeded
2026-04-18T21:41:12Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:41356->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:41:16Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:37330->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:41:17Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:54628->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:47:18Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:32796->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:47:22Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:47428->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:47:22Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:54034->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:47:23Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:59804->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:47:25Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:37586->192.168.111.5:6443: read: connection reset by peer
pull-ci-openshift-ovn-kubernetes-release-4.16-e2e-aws-live-migration-sdn-ovn (all) - 3 runs, 67% failed, 150% of failures match = 100% impact
#2046190238878928896junit8 hours ago
2026-04-20T12:39:18Z node/ip-10-0-53-247.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7g28314j-c29f6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-247.ec2.internal?timeout=10s - read tcp 10.0.53.247:49184->10.0.5.234:6443: read: connection reset by peer
2026-04-20T12:39:28Z node/ip-10-0-53-247.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7g28314j-c29f6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-247.ec2.internal?timeout=10s - read tcp 10.0.53.247:59658->10.0.5.234:6443: read: connection reset by peer
#2046149518180749312junit10 hours ago
2026-04-20T10:21:53Z node/ip-10-0-28-156.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7g28314j-c29f6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-28-156.ec2.internal?timeout=10s - read tcp 10.0.28.156:53620->10.0.39.43:6443: read: connection reset by peer
2026-04-20T10:22:33Z node/ip-10-0-14-105.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7g28314j-c29f6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-105.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046149518180749312junit10 hours ago
2026-04-20T10:23:01Z node/ip-10-0-39-77.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7g28314j-c29f6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-77.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T10:44:15Z node/ip-10-0-28-156.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7g28314j-c29f6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-28-156.ec2.internal?timeout=10s - read tcp 10.0.28.156:46896->10.0.39.43:6443: read: connection reset by peer
2026-04-20T10:49:04Z node/ip-10-0-38-249.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7g28314j-c29f6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-38-249.ec2.internal?timeout=10s - read tcp 10.0.38.249:47564->10.0.39.43:6443: read: connection reset by peer
2026-04-20T10:49:12Z node/ip-10-0-39-77.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7g28314j-c29f6.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-77.ec2.internal?timeout=10s - read tcp 10.0.39.77:46382->10.0.39.43:6443: read: connection reset by peer
#2046104902404411392junit13 hours ago
namespace/openshift-sdn node/ip-10-0-31-132.ec2.internal pod/sdn-controller-l7vjj uid/e729a99d-008f-4400-b762-5160214a555b container/sdn-controller restarted 1 times:
cause/Error code/2 reason/ContainerExit  1 request.go:1116] Unexpected error when reading response body: read tcp 10.0.31.132:54952->10.0.15.227:6443: read: connection reset by peer
E0420 06:48:06.051333       1 leaderelection.go:332] error retrieving resource lock openshift-sdn/openshift-network-controller: unexpected error when reading response body. Please retry. Original error: read tcp 10.0.31.132:54952->10.0.15.227:6443: read: connection reset by peer
I0420 06:48:07.250247       1 reflector.go:351] Caches populated for *v1.CloudPrivateIPConfig from k8s.io/client-go@v1.29.1/tools/cache/reflector.go:229
#2046104902404411392junit13 hours ago
I0420 06:39:21.503800       1 leaderelection.go:250] attempting to acquire leader lease openshift-sdn/openshift-network-controller...
E0420 06:48:06.030450       1 request.go:1116] Unexpected error when reading response body: read tcp 10.0.53.248:58418->10.0.15.227:6443: read: connection reset by peer
E0420 06:48:06.034999       1 leaderelection.go:332] error retrieving resource lock openshift-sdn/openshift-network-controller: unexpected error when reading response body. Please retry. Original error: read tcp 10.0.53.248:58418->10.0.15.227:6443: read: connection reset by peer
#2046104902404411392junit13 hours ago
2026-04-20T07:25:04Z node/ip-10-0-51-115.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsb9fbd-c29f6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-115.ec2.internal?timeout=10s - read tcp 10.0.51.115:54986->10.0.15.227:6443: read: connection reset by peer
2026-04-20T07:25:09Z node/ip-10-0-44-181.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsb9fbd-c29f6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-181.ec2.internal?timeout=10s - read tcp 10.0.44.181:59542->10.0.15.227:6443: read: connection reset by peer
2026-04-20T07:43:07Z node/ip-10-0-44-181.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsb9fbd-c29f6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-181.ec2.internal?timeout=10s - read tcp 10.0.44.181:53456->10.0.15.227:6443: read: connection reset by peer
2026-04-20T07:43:35Z node/ip-10-0-41-75.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsb9fbd-c29f6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-41-75.ec2.internal?timeout=10s - read tcp 10.0.41.75:50334->10.0.15.227:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-5.0-upgrade-from-stable-4.22-e2e-gcp-ovn-rt-upgrade (all) - 78 runs, 33% failed, 12% of failures match = 4% impact
#2046142680957718528junit8 hours ago
# [sig-storage] In-tree Volumes [Driver: nfs3] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
fail [k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:819]: Delete "https://api.ci-op-8s9h5g93-343b5.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/e2e-provisioning-83/persistentvolumeclaims/pvc-pwd6v": read tcp 10.131.102.9:47906->34.49.164.141:6443: read: connection reset by peer
#2045943046545084416junit21 hours ago
# [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
fail [k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:459]: DeferCleanup callback returned error: pod Delete API error: Delete "https://api.ci-op-fkvvdwyw-343b5.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/e2e-provisioning-7101/pods/pod-subpath-test-inlinevolume-h9ds": read tcp 10.130.42.230:49444->34.120.98.67:6443: read: connection reset by peer
#2045766838394032128junit33 hours ago
# [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
fail [k8s.io/kubernetes/test/e2e/apimachinery/crd_publish_openapi.go:477]: failed to wait for definition "com.example.crd-publish-openapi-test-multi-to-single-ver.v6alpha1.e2e-test-e2e-crd-publish-openapi-5796-2809-crd" not to be served anymore: failed to wait for OpenAPI spec validating condition: read tcp 10.130.43.110:35542->34.111.14.4:6443: read: connection reset by peer; lastMsg:
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-recovery-3of3 (all) - 4 runs, 0% failed, 75% of runs match
#2046174269137752064junit8 hours ago
2026-04-20T11:59:28Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:55118->192.168.111.5:6443: read: connection reset by peer
2026-04-20T11:59:38Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045960392265437184junit22 hours ago
2026-04-19T22:03:57Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:54688->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:04:07Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045598000528494592junit46 hours ago
2026-04-18T21:56:48Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - context deadline exceeded
2026-04-18T22:15:15Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:33444->192.168.111.5:6443: read: connection reset by peer
2026-04-18T22:15:30Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045598000528494592junit46 hours ago
2026-04-18T22:21:24Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-18T22:32:07Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:47776->192.168.111.5:6443: read: connection reset by peer
2026-04-18T22:43:28Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-shiftstack-ci-release-4.19-e2e-openstack-ovn-etcd-scaling (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046180447775363072junit8 hours ago
2026-04-20T12:18:09Z node/xjtsii81-a1af9-9bj9g-worker-0-clndd - reason/FailedToUpdateLease https://api-int.xjtsii81-a1af9.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjtsii81-a1af9-9bj9g-worker-0-clndd?timeout=10s - read tcp 10.0.2.100:51540->10.0.0.5:6443: read: connection reset by peer
2026-04-20T12:18:15Z node/xjtsii81-a1af9-9bj9g-worker-0-g42zm - reason/FailedToUpdateLease https://api-int.xjtsii81-a1af9.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjtsii81-a1af9-9bj9g-worker-0-g42zm?timeout=10s - read tcp 10.0.1.240:57616->10.0.0.5:6443: read: connection reset by peer
2026-04-20T12:18:17Z node/xjtsii81-a1af9-9bj9g-worker-0-9bcns - reason/FailedToUpdateLease https://api-int.xjtsii81-a1af9.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjtsii81-a1af9-9bj9g-worker-0-9bcns?timeout=10s - read tcp 10.0.3.212:44092->10.0.0.5:6443: read: connection reset by peer
2026-04-20T12:20:21Z node/xjtsii81-a1af9-9bj9g-master-2 - reason/FailedToUpdateLease https://api-int.xjtsii81-a1af9.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjtsii81-a1af9-9bj9g-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-cluster-authentication-operator-release-4.20-periodics-e2e-aws-external-oidc-rollback-techpreview (all) - 2 runs, 0% failed, 100% of runs match
#2046159316922142720junit8 hours ago
2026-04-20T10:53:02Z node/ip-10-0-51-44.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-44.us-east-2.compute.internal?timeout=10s - read tcp 10.0.51.44:41028->10.0.53.63:6443: read: connection reset by peer
2026-04-20T10:56:53Z node/ip-10-0-26-14.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-14.us-east-2.compute.internal?timeout=10s - read tcp 10.0.26.14:51000->10.0.53.63:6443: read: connection reset by peer
2026-04-20T10:56:58Z node/ip-10-0-64-105.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-105.us-east-2.compute.internal?timeout=10s - read tcp 10.0.64.105:52672->10.0.121.63:6443: read: connection reset by peer
2026-04-20T11:00:44Z node/ip-10-0-37-17.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-17.us-east-2.compute.internal?timeout=10s - read tcp 10.0.37.17:57432->10.0.53.63:6443: read: connection reset by peer
2026-04-20T11:01:29Z node/ip-10-0-26-14.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-14.us-east-2.compute.internal?timeout=10s - read tcp 10.0.26.14:56612->10.0.121.63:6443: read: connection reset by peer
2026-04-20T11:06:23Z node/ip-10-0-121-196.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-196.us-east-2.compute.internal?timeout=10s - read tcp 10.0.121.196:49902->10.0.53.63:6443: read: connection reset by peer
2026-04-20T11:10:03Z node/ip-10-0-24-167.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-167.us-east-2.compute.internal?timeout=10s - read tcp 10.0.24.167:43662->10.0.53.63:6443: read: connection reset by peer
2026-04-20T11:10:03Z node/ip-10-0-51-44.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-44.us-east-2.compute.internal?timeout=10s - read tcp 10.0.51.44:43150->10.0.53.63:6443: read: connection reset by peer
2026-04-20T11:13:59Z node/ip-10-0-24-167.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-167.us-east-2.compute.internal?timeout=10s - read tcp 10.0.24.167:33872->10.0.53.63:6443: read: connection reset by peer
2026-04-20T11:14:12Z node/ip-10-0-121-196.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-196.us-east-2.compute.internal?timeout=10s - read tcp 10.0.121.196:41222->10.0.121.63:6443: read: connection reset by peer
2026-04-20T11:19:59Z node/ip-10-0-37-17.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-17.us-east-2.compute.internal?timeout=10s - read tcp 10.0.37.17:52358->10.0.53.63:6443: read: connection reset by peer
2026-04-20T11:20:13Z node/ip-10-0-26-14.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-14.us-east-2.compute.internal?timeout=10s - read tcp 10.0.26.14:51994->10.0.121.63:6443: read: connection reset by peer
2026-04-20T11:24:08Z node/ip-10-0-26-14.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-14.us-east-2.compute.internal?timeout=10s - read tcp 10.0.26.14:55906->10.0.53.63:6443: read: connection reset by peer
2026-04-20T11:24:30Z node/ip-10-0-64-105.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-0150rtq0-2e6e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-105.us-east-2.compute.internal?timeout=10s - read tcp 10.0.64.105:40402->10.0.53.63:6443: read: connection reset by peer

... 29 lines not shown

#2045795805968732160 junit 33 hours ago
2026-04-19T10:31:09Z node/ip-10-0-1-185.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-1-185.us-east-2.compute.internal?timeout=10s - read tcp 10.0.1.185:51540->10.0.37.208:6443: read: connection reset by peer
2026-04-19T10:31:45Z node/ip-10-0-72-115.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-115.us-east-2.compute.internal?timeout=10s - read tcp 10.0.72.115:53798->10.0.37.208:6443: read: connection reset by peer
2026-04-19T10:35:05Z node/ip-10-0-83-109.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-109.us-east-2.compute.internal?timeout=10s - read tcp 10.0.83.109:49494->10.0.37.208:6443: read: connection reset by peer
2026-04-19T10:35:19Z node/ip-10-0-72-115.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-115.us-east-2.compute.internal?timeout=10s - read tcp 10.0.72.115:55266->10.0.37.208:6443: read: connection reset by peer
2026-04-19T10:39:04Z node/ip-10-0-11-97.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-97.us-east-2.compute.internal?timeout=10s - read tcp 10.0.11.97:42222->10.0.101.30:6443: read: connection reset by peer
2026-04-19T10:39:06Z node/ip-10-0-54-38.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-54-38.us-east-2.compute.internal?timeout=10s - read tcp 10.0.54.38:34430->10.0.37.208:6443: read: connection reset by peer
2026-04-19T10:39:09Z node/ip-10-0-1-185.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-1-185.us-east-2.compute.internal?timeout=10s - read tcp 10.0.1.185:48838->10.0.101.30:6443: read: connection reset by peer
2026-04-19T10:39:15Z node/ip-10-0-11-97.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-97.us-east-2.compute.internal?timeout=10s - read tcp 10.0.11.97:41790->10.0.101.30:6443: read: connection reset by peer
2026-04-19T10:39:15Z node/ip-10-0-72-115.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-115.us-east-2.compute.internal?timeout=10s - read tcp 10.0.72.115:49826->10.0.101.30:6443: read: connection reset by peer
2026-04-19T10:44:06Z node/ip-10-0-1-185.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-1-185.us-east-2.compute.internal?timeout=10s - read tcp 10.0.1.185:54026->10.0.37.208:6443: read: connection reset by peer
2026-04-19T10:44:11Z node/ip-10-0-72-115.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-115.us-east-2.compute.internal?timeout=10s - read tcp 10.0.72.115:49432->10.0.101.30:6443: read: connection reset by peer
2026-04-19T10:47:55Z node/ip-10-0-11-97.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-97.us-east-2.compute.internal?timeout=10s - read tcp 10.0.11.97:50140->10.0.37.208:6443: read: connection reset by peer
2026-04-19T10:48:01Z node/ip-10-0-83-109.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-109.us-east-2.compute.internal?timeout=10s - read tcp 10.0.83.109:40638->10.0.37.208:6443: read: connection reset by peer
2026-04-19T10:48:01Z node/ip-10-0-1-185.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48ipzpg5-2e6e2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-1-185.us-east-2.compute.internal?timeout=10s - read tcp 10.0.1.185:34542->10.0.37.208:6443: read: connection reset by peer

... 39 lines not shown

periodic-ci-shiftstack-ci-release-4.17-e2e-openstack-ccpmso-zone (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046156184934682624 junit 8 hours ago
2026-04-20T11:44:13Z node/7gxj2h5i-c228a-v4rfg-worker-0-t26qf - reason/FailedToUpdateLease https://api-int.7gxj2h5i-c228a.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/7gxj2h5i-c228a-v4rfg-worker-0-t26qf?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T11:59:38Z node/7gxj2h5i-c228a-v4rfg-worker-0-2g6bn - reason/FailedToUpdateLease https://api-int.7gxj2h5i-c228a.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/7gxj2h5i-c228a-v4rfg-worker-0-2g6bn?timeout=10s - read tcp 10.0.0.111:37742->10.0.0.5:6443: read: connection reset by peer
2026-04-20T12:19:31Z node/7gxj2h5i-c228a-v4rfg-master-2qjpv-2 - reason/FailedToUpdateLease https://api-int.7gxj2h5i-c228a.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/7gxj2h5i-c228a-v4rfg-master-2qjpv-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-multiarch-main-nightly-4.20-ocp-e2e-upgrade-aws-ovn-multi-x-x (all) - 6 runs, 0% failed, 100% of runs match
#2046174129870082048 junit 8 hours ago
2026-04-20T11:37:29Z node/ip-10-0-43-82.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ccxp8iyn-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-82.ec2.internal?timeout=10s - read tcp 10.0.43.82:53290->10.0.85.245:6443: read: connection reset by peer
2026-04-20T11:42:15Z node/ip-10-0-43-82.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ccxp8iyn-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-82.ec2.internal?timeout=10s - read tcp 10.0.43.82:47580->10.0.16.130:6443: read: connection reset by peer
2026-04-20T11:45:39Z node/ip-10-0-43-82.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ccxp8iyn-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-82.ec2.internal?timeout=10s - read tcp 10.0.43.82:49318->10.0.85.245:6443: read: connection reset by peer
2026-04-20T12:28:09Z node/ip-10-0-96-187.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ccxp8iyn-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-187.ec2.internal?timeout=10s - read tcp 10.0.96.187:53934->10.0.16.130:6443: read: connection reset by peer
#2046041642078572544 junit 18 hours ago
2026-04-20T02:40:04Z node/ip-10-0-123-202.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bm5xcypd-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-202.ec2.internal?timeout=10s - read tcp 10.0.123.202:59132->10.0.14.33:6443: read: connection reset by peer
2026-04-20T02:40:04Z node/ip-10-0-105-56.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bm5xcypd-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-56.ec2.internal?timeout=10s - read tcp 10.0.105.56:52450->10.0.66.185:6443: read: connection reset by peer
2026-04-20T02:48:03Z node/ip-10-0-105-56.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bm5xcypd-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-56.ec2.internal?timeout=10s - read tcp 10.0.105.56:49456->10.0.66.185:6443: read: connection reset by peer
2026-04-20T03:06:01Z node/ip-10-0-34-99.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bm5xcypd-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-34-99.ec2.internal?timeout=10s - read tcp 10.0.34.99:47114->10.0.14.33:6443: read: connection reset by peer
2026-04-20T03:12:22Z node/ip-10-0-67-27.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bm5xcypd-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-27.ec2.internal?timeout=10s - read tcp 10.0.67.27:36054->10.0.14.33:6443: read: connection reset by peer
2026-04-20T03:12:27Z node/ip-10-0-18-172.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bm5xcypd-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-18-172.ec2.internal?timeout=10s - read tcp 10.0.18.172:37960->10.0.14.33:6443: read: connection reset by peer
#2045956877627428864 junit 23 hours ago
2026-04-19T21:21:18Z node/ip-10-0-58-121.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kx1911l2-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-121.us-east-2.compute.internal?timeout=10s - read tcp 10.0.58.121:58130->10.0.8.70:6443: read: connection reset by peer
2026-04-19T21:21:18Z node/ip-10-0-93-35.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kx1911l2-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-93-35.us-east-2.compute.internal?timeout=10s - read tcp 10.0.93.35:38292->10.0.93.165:6443: read: connection reset by peer
2026-04-19T21:21:19Z node/ip-10-0-84-13.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kx1911l2-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-13.us-east-2.compute.internal?timeout=10s - read tcp 10.0.84.13:52588->10.0.8.70:6443: read: connection reset by peer
2026-04-19T21:25:33Z node/ip-10-0-93-35.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kx1911l2-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-93-35.us-east-2.compute.internal?timeout=10s - read tcp 10.0.93.35:58370->10.0.93.165:6443: read: connection reset by peer
2026-04-19T21:51:13Z node/ip-10-0-81-204.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kx1911l2-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-81-204.us-east-2.compute.internal?timeout=10s - read tcp 10.0.81.204:40398->10.0.8.70:6443: read: connection reset by peer
2026-04-19T21:51:44Z node/ip-10-0-81-204.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kx1911l2-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-81-204.us-east-2.compute.internal?timeout=10s - read tcp 10.0.81.204:37032->10.0.93.165:6443: read: connection reset by peer
#2045822641402548224 junit 32 hours ago
2026-04-19T12:13:19Z node/ip-10-0-64-107.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pwqr8607-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-107.us-west-1.compute.internal?timeout=10s - read tcp 10.0.64.107:40764->10.0.32.37:6443: read: connection reset by peer
2026-04-19T12:13:21Z node/ip-10-0-31-80.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pwqr8607-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-80.us-west-1.compute.internal?timeout=10s - read tcp 10.0.31.80:39606->10.0.117.116:6443: read: connection reset by peer
2026-04-19T12:13:21Z node/ip-10-0-50-133.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pwqr8607-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-133.us-west-1.compute.internal?timeout=10s - read tcp 10.0.50.133:36198->10.0.117.116:6443: read: connection reset by peer
2026-04-19T12:21:06Z node/ip-10-0-27-166.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pwqr8607-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-166.us-west-1.compute.internal?timeout=10s - read tcp 10.0.27.166:41940->10.0.32.37:6443: read: connection reset by peer
2026-04-19T12:21:06Z node/ip-10-0-39-72.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pwqr8607-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-72.us-west-1.compute.internal?timeout=10s - read tcp 10.0.39.72:46374->10.0.32.37:6443: read: connection reset by peer
2026-04-19T12:39:03Z node/ip-10-0-120-160.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pwqr8607-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-160.us-west-1.compute.internal?timeout=10s - read tcp 10.0.120.160:60430->10.0.117.116:6443: read: connection reset by peer
2026-04-19T12:42:15Z node/ip-10-0-31-80.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-pwqr8607-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-80.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045686214572380160 junit 41 hours ago
Apr 19 03:18:28.702 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics: failed to apply / update (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics: Patch "https://api-int.ci-op-52fpp2ps-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/network-diagnostics?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.23.44:60212->10.0.63.173:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 03:18:28.702 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics: failed to apply / update (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics: Patch "https://api-int.ci-op-52fpp2ps-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/network-diagnostics?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.23.44:60212->10.0.63.173:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 03:18:56.762 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045686214572380160 junit 41 hours ago
2026-04-19T03:10:29Z node/ip-10-0-40-178.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-52fpp2ps-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-178.ec2.internal?timeout=10s - read tcp 10.0.40.178:60554->10.0.120.226:6443: read: connection reset by peer
2026-04-19T03:14:37Z node/ip-10-0-94-115.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-52fpp2ps-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-94-115.ec2.internal?timeout=10s - read tcp 10.0.94.115:38930->10.0.63.173:6443: read: connection reset by peer
2026-04-19T03:18:18Z node/ip-10-0-40-178.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-52fpp2ps-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-178.ec2.internal?timeout=10s - read tcp 10.0.40.178:44756->10.0.63.173:6443: read: connection reset by peer
2026-04-19T03:18:19Z node/ip-10-0-23-44.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-52fpp2ps-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-44.ec2.internal?timeout=10s - read tcp 10.0.23.44:54982->10.0.63.173:6443: read: connection reset by peer
2026-04-19T03:18:28Z node/ip-10-0-119-160.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-52fpp2ps-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-160.ec2.internal?timeout=10s - read tcp 10.0.119.160:59790->10.0.120.226:6443: read: connection reset by peer
2026-04-19T03:29:05Z node/ip-10-0-125-188.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-52fpp2ps-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-188.ec2.internal?timeout=10s - read tcp 10.0.125.188:57500->10.0.120.226:6443: read: connection reset by peer
2026-04-19T03:29:15Z node/ip-10-0-125-188.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-52fpp2ps-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-188.ec2.internal?timeout=10s - read tcp 10.0.125.188:33406->10.0.63.173:6443: read: connection reset by peer
2026-04-19T03:32:31Z node/ip-10-0-23-44.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-52fpp2ps-26f25.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-44.ec2.internal?timeout=10s - context deadline exceeded
#2045605714851794944 junit 46 hours ago
2026-04-18T22:09:43Z node/ip-10-0-111-39.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb2fhrit-26f25.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-39.us-west-2.compute.internal?timeout=10s - read tcp 10.0.111.39:50446->10.0.66.221:6443: read: connection reset by peer
2026-04-18T22:09:45Z node/ip-10-0-72-192.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb2fhrit-26f25.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-192.us-west-2.compute.internal?timeout=10s - read tcp 10.0.72.192:47302->10.0.66.221:6443: read: connection reset by peer
2026-04-18T22:17:36Z node/ip-10-0-54-187.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb2fhrit-26f25.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-54-187.us-west-2.compute.internal?timeout=10s - read tcp 10.0.54.187:55596->10.0.33.206:6443: read: connection reset by peer
2026-04-18T22:17:53Z node/ip-10-0-111-39.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb2fhrit-26f25.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-39.us-west-2.compute.internal?timeout=10s - read tcp 10.0.111.39:38114->10.0.66.221:6443: read: connection reset by peer
2026-04-18T22:42:17Z node/ip-10-0-64-157.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb2fhrit-26f25.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-157.us-west-2.compute.internal?timeout=10s - read tcp 10.0.64.157:42822->10.0.33.206:6443: read: connection reset by peer
2026-04-18T22:42:19Z node/ip-10-0-18-86.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb2fhrit-26f25.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-18-86.us-west-2.compute.internal?timeout=10s - read tcp 10.0.18.86:55704->10.0.66.221:6443: read: connection reset by peer
2026-04-18T22:42:21Z node/ip-10-0-111-39.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb2fhrit-26f25.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-39.us-west-2.compute.internal?timeout=10s - read tcp 10.0.111.39:49402->10.0.33.206:6443: read: connection reset by peer
2026-04-18T22:42:27Z node/ip-10-0-72-192.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-rb2fhrit-26f25.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-192.us-west-2.compute.internal?timeout=10s - read tcp 10.0.72.192:57090->10.0.66.221:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.14-e2e-aws-ovn-upgrade (all) - 4 runs, 25% failed, 400% of failures match = 100% impact
#2046163876118007808 junit 9 hours ago
2026-04-20T10:33:31Z node/ip-10-0-105-90.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kbsrg4s9-5cf44.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-90.us-west-1.compute.internal?timeout=10s - read tcp 10.0.105.90:60144->10.0.53.126:6443: read: connection reset by peer
2026-04-20T10:37:15Z node/ip-10-0-115-207.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kbsrg4s9-5cf44.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-115-207.us-west-1.compute.internal?timeout=10s - read tcp 10.0.115.207:54340->10.0.96.139:6443: read: connection reset by peer
2026-04-20T10:41:10Z node/ip-10-0-115-207.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kbsrg4s9-5cf44.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-115-207.us-west-1.compute.internal?timeout=10s - read tcp 10.0.115.207:48912->10.0.96.139:6443: read: connection reset by peer
2026-04-20T10:41:11Z node/ip-10-0-38-245.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kbsrg4s9-5cf44.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-38-245.us-west-1.compute.internal?timeout=10s - read tcp 10.0.38.245:36618->10.0.96.139:6443: read: connection reset by peer
2026-04-20T10:41:11Z node/ip-10-0-2-226.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kbsrg4s9-5cf44.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-2-226.us-west-1.compute.internal?timeout=10s - read tcp 10.0.2.226:37532->10.0.53.126:6443: read: connection reset by peer
#2046163876118007808 junit 9 hours ago
Apr 20 10:34:01.844 - 48s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io: failed to apply / update (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io: Patch "https://api-int.ci-op-kbsrg4s9-5cf44.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/multus.openshift.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.71.216:55570->10.0.96.139:6443: read: connection reset by peer
Apr 20 11:18:29.902 - 47s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: Patch "https://api-int.ci-op-kbsrg4s9-5cf44.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/roles/whereabouts-cni?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.38.245:58398->10.0.53.126:6443: read: connection reset by peer
Apr 20 11:24:04.797 - 47s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target: Patch "https://api-int.ci-op-kbsrg4s9-5cf44.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apps/v1/namespaces/openshift-network-diagnostics/daemonsets/network-check-target?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.38.245:48420->10.0.53.126:6443: read: connection reset by peer
#2046106422831222784 junit 12 hours ago
Apr 20 07:06:35.840 - 48s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config: failed to apply / update (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config: Patch "https://api-int.ci-op-5ft1s205-5cf44.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps/ovnkube-config?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.75.28:50986->10.0.91.22:6443: read: connection reset by peer
Apr 20 07:56:18.746 - 47s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io: Patch "https://api-int.ci-op-5ft1s205-5cf44.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/overlappingrangeipreservations.whereabouts.cni.cncf.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.23.167:48462->10.0.27.225:6443: read: connection reset by peer
#2045898806385446912 junit 26 hours ago
2026-04-19T17:15:27Z node/ip-10-0-102-182.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7p9qymsz-5cf44.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-182.ec2.internal?timeout=10s - read tcp 10.0.102.182:60666->10.0.4.165:6443: read: connection reset by peer
#2045898806385446912 junit 26 hours ago
Apr 19 17:15:22.838 - 47s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity: Patch "https://api-int.ci-op-7p9qymsz-5cf44.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.16.133:35858->10.0.80.11:6443: read: connection reset by peer
#2045583336167968768 junit 47 hours ago
2026-04-18T20:22:50Z node/ip-10-0-108-33.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-gvtzyt4i-5cf44.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-108-33.us-west-2.compute.internal?timeout=10s - read tcp 10.0.108.33:44084->10.0.60.65:6443: read: connection reset by peer
2026-04-18T21:09:21Z node/ip-10-0-36-202.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-gvtzyt4i-5cf44.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-202.us-west-2.compute.internal?timeout=10s - read tcp 10.0.36.202:53222->10.0.107.225:6443: read: connection reset by peer
#2045583336167968768junit47 hours ago
Apr 18 21:09:11.244 - 47s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity: Patch "https://api-int.ci-op-gvtzyt4i-5cf44.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.33.162:59540->10.0.107.225:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.22-ocp-e2e-aws-ovn-multi-x-ax (all) - 7 runs, 43% failed, 33% of failures match = 14% impact
#2046157898345615360junit9 hours ago
    <*fmt.wrapError | 0xc009cd8fc0>:
    error reading from error stream: read message: read tcp 172.24.102.9:38420->54.183.110.122:6443: read: connection reset by peer
    {
        msg: "error reading from error stream: read message: read tcp 172.24.102.9:38420->54.183.110.122:6443: read: connection reset by peer",
        err: <*fmt.wrapError | 0xc009cd8fa0>{
            msg: "read message: read tcp 172.24.102.9:38420->54.183.110.122:6443: read: connection reset by peer",
            err: <*net.OpError | 0xc0010ac780>{
periodic-ci-openshift-cluster-authentication-operator-release-4.20-periodics-e2e-aws-external-oidc-uid-extra-techpreview (all) - 2 runs, 0% failed, 100% of runs match
#2046159318608252928junit9 hours ago
2026-04-20T10:55:24Z node/ip-10-0-65-171.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-171.us-west-1.compute.internal?timeout=10s - read tcp 10.0.65.171:44616->10.0.46.2:6443: read: connection reset by peer
2026-04-20T10:59:14Z node/ip-10-0-9-207.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-207.us-west-1.compute.internal?timeout=10s - read tcp 10.0.9.207:51154->10.0.46.2:6443: read: connection reset by peer
2026-04-20T10:59:22Z node/ip-10-0-126-33.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-33.us-west-1.compute.internal?timeout=10s - read tcp 10.0.126.33:34002->10.0.89.125:6443: read: connection reset by peer
2026-04-20T10:59:24Z node/ip-10-0-125-0.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-0.us-west-1.compute.internal?timeout=10s - read tcp 10.0.125.0:56818->10.0.89.125:6443: read: connection reset by peer
2026-04-20T11:04:36Z node/ip-10-0-65-171.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-171.us-west-1.compute.internal?timeout=10s - read tcp 10.0.65.171:36122->10.0.46.2:6443: read: connection reset by peer
2026-04-20T11:08:24Z node/ip-10-0-30-130.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-130.us-west-1.compute.internal?timeout=10s - read tcp 10.0.30.130:36688->10.0.89.125:6443: read: connection reset by peer
2026-04-20T11:12:13Z node/ip-10-0-65-171.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-171.us-west-1.compute.internal?timeout=10s - read tcp 10.0.65.171:44956->10.0.89.125:6443: read: connection reset by peer
2026-04-20T11:18:36Z node/ip-10-0-30-130.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-130.us-west-1.compute.internal?timeout=10s - read tcp 10.0.30.130:57848->10.0.46.2:6443: read: connection reset by peer
2026-04-20T11:22:37Z node/ip-10-0-65-171.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-171.us-west-1.compute.internal?timeout=10s - read tcp 10.0.65.171:56870->10.0.46.2:6443: read: connection reset by peer
2026-04-20T11:26:21Z node/ip-10-0-65-171.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-171.us-west-1.compute.internal?timeout=10s - read tcp 10.0.65.171:37342->10.0.46.2:6443: read: connection reset by peer
2026-04-20T11:26:27Z node/ip-10-0-125-0.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-0.us-west-1.compute.internal?timeout=10s - read tcp 10.0.125.0:53872->10.0.46.2:6443: read: connection reset by peer
2026-04-20T11:32:02Z node/ip-10-0-82-166.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-82-166.us-west-1.compute.internal?timeout=10s - read tcp 10.0.82.166:38814->10.0.89.125:6443: read: connection reset by peer
2026-04-20T11:32:08Z node/ip-10-0-65-171.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-65-171.us-west-1.compute.internal?timeout=10s - read tcp 10.0.65.171:43772->10.0.89.125:6443: read: connection reset by peer
2026-04-20T11:32:34Z node/ip-10-0-9-207.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-37vb9g9y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-207.us-west-1.compute.internal?timeout=10s - read tcp 10.0.9.207:42224->10.0.46.2:6443: read: connection reset by peer

... 15 lines not shown

#2045795806052618240junit33 hours ago
2026-04-19T10:38:17Z node/ip-10-0-127-186.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-186.ec2.internal?timeout=10s - read tcp 10.0.127.186:42718->10.0.13.152:6443: read: connection reset by peer
2026-04-19T10:38:27Z node/ip-10-0-127-186.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-186.ec2.internal?timeout=10s - read tcp 10.0.127.186:42966->10.0.13.152:6443: read: connection reset by peer
2026-04-19T10:42:09Z node/ip-10-0-67-238.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-238.ec2.internal?timeout=10s - read tcp 10.0.67.238:42106->10.0.72.78:6443: read: connection reset by peer
2026-04-19T10:42:23Z node/ip-10-0-127-186.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-186.ec2.internal?timeout=10s - read tcp 10.0.127.186:60920->10.0.13.152:6443: read: connection reset by peer
2026-04-19T10:47:53Z node/ip-10-0-53-5.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-5.ec2.internal?timeout=10s - read tcp 10.0.53.5:34018->10.0.72.78:6443: read: connection reset by peer
2026-04-19T10:51:41Z node/ip-10-0-67-238.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-238.ec2.internal?timeout=10s - read tcp 10.0.67.238:57020->10.0.72.78:6443: read: connection reset by peer
2026-04-19T10:51:43Z node/ip-10-0-127-186.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-186.ec2.internal?timeout=10s - read tcp 10.0.127.186:52482->10.0.72.78:6443: read: connection reset by peer
2026-04-19T10:51:48Z node/ip-10-0-53-5.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-5.ec2.internal?timeout=10s - read tcp 10.0.53.5:52678->10.0.72.78:6443: read: connection reset by peer
2026-04-19T10:55:41Z node/ip-10-0-53-5.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-5.ec2.internal?timeout=10s - read tcp 10.0.53.5:50930->10.0.13.152:6443: read: connection reset by peer
2026-04-19T11:01:56Z node/ip-10-0-127-186.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-186.ec2.internal?timeout=10s - read tcp 10.0.127.186:42848->10.0.13.152:6443: read: connection reset by peer
2026-04-19T11:06:05Z node/ip-10-0-53-5.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-5.ec2.internal?timeout=10s - read tcp 10.0.53.5:49292->10.0.13.152:6443: read: connection reset by peer
2026-04-19T11:09:47Z node/ip-10-0-26-123.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-123.ec2.internal?timeout=10s - read tcp 10.0.26.123:40238->10.0.13.152:6443: read: connection reset by peer
2026-04-19T11:10:00Z node/ip-10-0-53-5.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-5.ec2.internal?timeout=10s - read tcp 10.0.53.5:39190->10.0.13.152:6443: read: connection reset by peer
2026-04-19T11:15:24Z node/ip-10-0-26-123.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-88yd557y-a3910.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-123.ec2.internal?timeout=10s - read tcp 10.0.26.123:50814->10.0.13.152:6443: read: connection reset by peer

... 14 lines not shown

pull-ci-opendatahub-io-opendatahub-operator-main-opendatahub-operator-rhoai-e2e (all) - 21 runs, 86% failed, 6% of failures match = 5% impact
#2046183699283709952junit9 hours ago
2026/04/20 11:33:19   11:31:54 INFO Pod/spark-operator-webhook-56f78bd888-vmpx8: Pulled - Container image "quay.io/opendatahub/spark-operator:v2.4.0-odh-3.4.0-GA" already present on machine
2026/04/20 11:33:19   11:31:54 WARN Pod/kserve-controller-manager-5444c68745-99g76: FailedMount - MountVolume.SetUp failed for volume "kube-api-access-5hht2" : failed to fetch token: [REDACTED] "https://api-int.odh-4-20-aws-kjdm6.openshift-ci-aws.rhaiseng.com:6443/api/v1/namespaces/redhat-ods-applications/serviceaccounts/kserve-controller-manager/token": read tcp 10.0.24.218:42418->10.0.36.85:6443: read: connection reset by peer
2026/04/20 11:33:19   11:31:54 INFO Pod/spark-operator-webhook-56f78bd888-vmpx8: Started - Started container webhook
#2046183699283709952junit9 hours ago
2026/04/20 11:33:12   11:31:54 INFO Pod/spark-operator-webhook-56f78bd888-vmpx8: Pulled - Container image "quay.io/opendatahub/spark-operator:v2.4.0-odh-3.4.0-GA" already present on machine
2026/04/20 11:33:12   11:31:54 WARN Pod/kserve-controller-manager-5444c68745-99g76: FailedMount - MountVolume.SetUp failed for volume "kube-api-access-5hht2" : failed to fetch token: [REDACTED] "https://api-int.odh-4-20-aws-kjdm6.openshift-ci-aws.rhaiseng.com:6443/api/v1/namespaces/redhat-ods-applications/serviceaccounts/kserve-controller-manager/token": read tcp 10.0.24.218:42418->10.0.36.85:6443: read: connection reset by peer
2026/04/20 11:33:12   11:31:54 INFO Pod/spark-operator-webhook-56f78bd888-vmpx8: Started - Started container webhook
pull-ci-openshift-cluster-kube-apiserver-operator-main-e2e-aws-ovn-serial-2of2 (all) - 23 runs, 13% failed, 33% of failures match = 4% impact
#2046140029088043008junit9 hours ago
# [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
fail [k8s.io/kubernetes/test/e2e/framework/pod/exec_util.go:113]: failed to execute command in pod iperf2-clients-w4gbn, container app: error reading from error stream: next reader: read tcp 172.24.122.8:40884->52.26.188.250:6443: read: connection reset by peer: error reading from error stream: next reader: read tcp 172.24.122.8:40884->52.26.188.250:6443: read: connection reset by peer
periodic-ci-openshift-ovn-kubernetes-release-4.22-periodics-e2e-metal-ipi-ovn-bgp-virt-ipv4 (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#2046121801469136896junit10 hours ago
2026-04-20T09:00:38Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:52104->192.168.111.5:6443: read: connection reset by peer
2026-04-20T09:00:38Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:43632->192.168.111.5:6443: read: connection reset by peer
2026-04-20T09:00:39Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:52480->192.168.111.5:6443: read: connection reset by peer
2026-04-20T09:00:44Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:37322->192.168.111.5:6443: read: connection reset by peer
2026-04-20T09:00:55Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
pull-ci-openshift-cluster-api-operator-release-4.20-e2e-aws-ovn-techpreview (all) - 1 runs, 0% failed, 100% of runs match
#2046147599789985792junit10 hours ago
2026-04-20T09:35:19Z node/ip-10-0-0-121.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qhmqp47w-5410b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-121.ec2.internal?timeout=10s - read tcp 10.0.0.121:51186->10.0.28.84:6443: read: connection reset by peer
2026-04-20T09:35:22Z node/ip-10-0-43-48.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qhmqp47w-5410b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-48.ec2.internal?timeout=10s - read tcp 10.0.43.48:46880->10.0.28.84:6443: read: connection reset by peer
2026-04-20T09:43:43Z node/ip-10-0-11-19.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qhmqp47w-5410b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-19.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046147599789985792junit10 hours ago
2026-04-20T09:43:43Z node/ip-10-0-23-1.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qhmqp47w-5410b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-1.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T09:46:03Z node/ip-10-0-0-121.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qhmqp47w-5410b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-121.ec2.internal?timeout=10s - read tcp 10.0.0.121:45244->10.0.28.84:6443: read: connection reset by peer
2026-04-20T09:46:09Z node/ip-10-0-21-146.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qhmqp47w-5410b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-146.ec2.internal?timeout=10s - read tcp 10.0.21.146:54326->10.0.28.84:6443: read: connection reset by peer
2026-04-20T09:49:30Z node/ip-10-0-51-56.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qhmqp47w-5410b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-56.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.14-e2e-aws-ovn-serial (all) - 3 runs, 33% failed, 300% of failures match = 100% impact
#2046153305062641664junit10 hours ago
Apr 20 10:47:55.136 - 49s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (/v1, Kind=Namespace) /openshift-host-network: failed to apply / update (/v1, Kind=Namespace) /openshift-host-network: Patch "https://api-int.ci-op-cfpy6jsq-13283.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-host-network?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.117.208:45620->10.0.98.170:6443: read: connection reset by peer
Apr 20 10:52:03.130 - 49s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2: Patch "https://api-int.ci-op-cfpy6jsq-13283.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-node-identity/rolebindings/system:openshift:scc:hostnetwork-v2?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.117.208:57852->10.0.98.170:6443: read: connection reset by peer
Apr 20 11:04:24.433 - 49s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (/v1, Kind=Namespace) /openshift-network-node-identity: failed to apply / update (/v1, Kind=Namespace) /openshift-network-node-identity: Patch "https://api-int.ci-op-cfpy6jsq-13283.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.117.208:49554->10.0.98.170:6443: read: connection reset by peer
#2046106421484851200junit13 hours ago
2026-04-20T08:10:29Z node/ip-10-0-102-23.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w31dyhiz-13283.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-23.us-west-1.compute.internal?timeout=10s - read tcp 10.0.102.23:46124->10.0.104.23:6443: read: connection reset by peer
2026-04-20T08:10:35Z node/ip-10-0-52-223.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w31dyhiz-13283.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-223.us-west-1.compute.internal?timeout=10s - read tcp 10.0.52.223:55240->10.0.37.150:6443: read: connection reset by peer
2026-04-20T08:14:22Z node/ip-10-0-26-121.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w31dyhiz-13283.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-121.us-west-1.compute.internal?timeout=10s - read tcp 10.0.26.121:38836->10.0.37.150:6443: read: connection reset by peer
2026-04-20T08:18:49Z node/ip-10-0-43-199.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w31dyhiz-13283.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-199.us-west-1.compute.internal?timeout=10s - read tcp 10.0.43.199:37660->10.0.37.150:6443: read: connection reset by peer
2026-04-20T08:26:43Z node/ip-10-0-71-51.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w31dyhiz-13283.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-51.us-west-1.compute.internal?timeout=10s - read tcp 10.0.71.51:40784->10.0.104.23:6443: read: connection reset by peer
2026-04-20T08:26:48Z node/ip-10-0-26-121.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-w31dyhiz-13283.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-121.us-west-1.compute.internal?timeout=10s - read tcp 10.0.26.121:50376->10.0.37.150:6443: read: connection reset by peer
#2045898806213480448junit27 hours ago
2026-04-19T17:58:39Z node/ip-10-0-11-130.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-15sr5ynq-13283.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-130.ec2.internal?timeout=10s - read tcp 10.0.11.130:51222->10.0.99.200:6443: read: connection reset by peer
2026-04-19T17:58:40Z node/ip-10-0-75-28.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-15sr5ynq-13283.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-75-28.ec2.internal?timeout=10s - read tcp 10.0.75.28:56360->10.0.3.104:6443: read: connection reset by peer
2026-04-19T17:58:44Z node/ip-10-0-52-106.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-15sr5ynq-13283.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-106.ec2.internal?timeout=10s - read tcp 10.0.52.106:47958->10.0.3.104:6443: read: connection reset by peer
2026-04-19T18:11:08Z node/ip-10-0-53-77.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-15sr5ynq-13283.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-77.ec2.internal?timeout=10s - read tcp 10.0.53.77:44612->10.0.3.104:6443: read: connection reset by peer
2026-04-19T18:11:14Z node/ip-10-0-11-130.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-15sr5ynq-13283.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-130.ec2.internal?timeout=10s - read tcp 10.0.11.130:48662->10.0.99.200:6443: read: connection reset by peer
2026-04-19T18:19:04Z node/ip-10-0-75-28.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-15sr5ynq-13283.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-75-28.ec2.internal?timeout=10s - read tcp 10.0.75.28:52212->10.0.99.200:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-upgrade (all) - 4 runs, 100% failed, 50% of failures match = 50% impact
#2046155177957789696junit10 hours ago
2026-04-20T11:07:28Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:42088->192.168.111.5:6443: read: connection reset by peer
2026-04-20T11:14:31Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:34102->192.168.111.5:6443: read: connection reset by peer
#2045792786657054720junit30 hours ago
2026-04-19T11:42:07Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:52790->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ovn-two-node-fencing-upgrade (all) - 4 runs, 100% failed, 75% of failures match = 75% impact
#2046155180348542976junit10 hours ago
2026-04-20T11:15:25Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:52234->192.168.111.5:6443: read: connection reset by peer
#2045943782293114880junit24 hours ago
2026-04-19T21:12:41Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:39266->192.168.111.5:6443: read: connection reset by peer
#2045792786900324352junit30 hours ago
2026-04-19T11:37:04Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:45118->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.12-upgrade-from-stable-4.11-e2e-aws-sdn-upgrade (all) - 3 runs, 67% failed, 50% of failures match = 33% impact
#2046145594245779456junit10 hours ago
Apr 20 09:56:24.800 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-sw7kk node/ip-10-0-178-184.ec2.internal uid/7bab259a-df3d-4159-a025-05116d4aa76a container/csi-driver reason/ContainerExit code/2 cause/Error
Apr 20 09:56:24.832 E ns/openshift-sdn pod/sdn-controller-457tk node/ip-10-0-193-156.ec2.internal uid/7d997999-00d8-49cf-9988-4067ed841342 container/sdn-controller reason/ContainerExit code/2 cause/Error I0420 08:55:39.423113       1 server.go:27] Starting HTTP metrics server\nI0420 08:55:39.423232       1 leaderelection.go:248] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0420 09:09:52.496278       1 request.go:1085] Unexpected error when reading response body: read tcp 10.0.193.156:49062->10.0.252.198:6443: read: connection reset by peer\nE0420 09:09:52.496465       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: unexpected error when reading response body. Please retry. Original error: read tcp 10.0.193.156:49062->10.0.252.198:6443: read: connection reset by peer\n
Apr 20 09:56:25.823 E ns/openshift-multus pod/multus-additional-cni-plugins-ltdrh node/ip-10-0-178-184.ec2.internal uid/13dd0c3f-f691-4a0b-9dfd-2a7a7e80f6e0 container/kube-multus-additional-cni-plugins reason/ContainerExit code/143 cause/Error
periodic-ci-openshift-release-main-nightly-4.13-e2e-aws-sdn-serial (all) - 3 runs, 33% failed, 100% of failures match = 33% impact
#2046134275916435456junit11 hours ago
Apr 20 09:34:47.345 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-161-184.us-east-2.compute.internal node/ip-10-0-161-184.us-east-2.compute.internal uid/e941cd1c-c951-472f-b14c-e2e89d9cae6f container/kube-apiserver-check-endpoints reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Apr 20 09:37:28.147 E clusteroperator/network condition/Degraded status/True reason/ApplyOperatorConfig changed: Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /cloud-network-config-controller: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /cloud-network-config-controller: Patch "https://api-int.ci-op-pxr8dzcg-a3593.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cloud-network-config-controller?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.159.10:40718->10.0.200.45:6443: read: connection reset by peer
Apr 20 09:37:28.147 - 35s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /cloud-network-config-controller: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /cloud-network-config-controller: Patch "https://api-int.ci-op-pxr8dzcg-a3593.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cloud-network-config-controller?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.159.10:40718->10.0.200.45:6443: read: connection reset by peer
Apr 20 09:38:39.065 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-231-175.us-east-2.compute.internal node/ip-10-0-231-175.us-east-2.compute.internal uid/4726e0cf-1317-454c-b017-d9ac5f597718 container/kube-apiserver-cert-syncer reason/ContainerExit code/2 cause/Error rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 09:31:33.618566       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0420 09:31:33.618983       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 09:31:34.418766       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0420 09:31:34.422844       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} 
{node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
#2046134275916435456junit11 hours ago
Apr 20 09:37:28.147 - 35s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /cloud-network-config-controller: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /cloud-network-config-controller: Patch "https://api-int.ci-op-pxr8dzcg-a3593.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cloud-network-config-controller?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.159.10:40718->10.0.200.45:6443: read: connection reset by peer
pull-ci-openshift-ovn-kubernetes-release-4.16-e2e-aws-ovn-serial (all) - 1 runs, 0% failed, 100% of runs match
#2046104903880806400junit12 hours ago
Apr 20 08:57:31.440 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding: Patch "https://api-int.ci-op-ivsb9fbd-edcc3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/metrics-daemon-sa-rolebinding?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.13.35:52216->10.0.37.240:6443: read: connection reset by peer (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)
Apr 20 08:57:31.440 - 25s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding: Patch "https://api-int.ci-op-ivsb9fbd-edcc3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/metrics-daemon-sa-rolebinding?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.13.35:52216->10.0.37.240:6443: read: connection reset by peer (exception: We are not worried about Available=False or Degraded=True blips for stable-system tests yet.)
Apr 20 08:57:56.849 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046104903880806400junit12 hours ago
2026-04-20T08:44:46Z node/ip-10-0-4-68.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsb9fbd-edcc3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-68.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T08:57:13Z node/ip-10-0-4-68.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsb9fbd-edcc3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-68.us-east-2.compute.internal?timeout=10s - read tcp 10.0.4.68:36994->10.0.37.240:6443: read: connection reset by peer
2026-04-20T09:01:18Z node/ip-10-0-31-231.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsb9fbd-edcc3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-231.us-east-2.compute.internal?timeout=10s - read tcp 10.0.31.231:37732->10.0.37.240:6443: read: connection reset by peer
2026-04-20T09:05:54Z node/ip-10-0-31-231.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsb9fbd-edcc3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-231.us-east-2.compute.internal?timeout=10s - read tcp 10.0.31.231:34656->10.0.37.240:6443: read: connection reset by peer
2026-04-20T09:09:51Z node/ip-10-0-4-205.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsb9fbd-edcc3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-205.us-east-2.compute.internal?timeout=10s - read tcp 10.0.4.205:57134->10.0.37.240:6443: read: connection reset by peer
2026-04-20T09:13:46Z node/ip-10-0-4-205.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsb9fbd-edcc3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-205.us-east-2.compute.internal?timeout=10s - read tcp 10.0.4.205:56978->10.0.37.240:6443: read: connection reset by peer
pull-ci-openshift-ovn-kubernetes-release-4.16-4.16-upgrade-from-stable-4.15-local-gateway-e2e-aws-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2046104902312136704junit11 hours ago
2026-04-20T07:08:47Z node/ip-10-0-0-25.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5620fggh-b310e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-25.us-west-1.compute.internal?timeout=10s - read tcp 10.0.0.25:37086->10.0.20.161:6443: read: connection reset by peer
2026-04-20T07:08:55Z node/ip-10-0-17-210.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5620fggh-b310e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-210.us-west-1.compute.internal?timeout=10s - read tcp 10.0.17.210:44784->10.0.20.161:6443: read: connection reset by peer
2026-04-20T07:22:52Z node/ip-10-0-17-210.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5620fggh-b310e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-210.us-west-1.compute.internal?timeout=10s - read tcp 10.0.17.210:56604->10.0.20.161:6443: read: connection reset by peer
2026-04-20T07:26:57Z node/ip-10-0-17-210.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-5620fggh-b310e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-210.us-west-1.compute.internal?timeout=10s - read tcp 10.0.17.210:55268->10.0.20.161:6443: read: connection reset by peer
pull-ci-openshift-cluster-network-operator-release-4.19-4.19-upgrade-from-stable-4.18-e2e-aws-ovn-upgrade-ipsec (all) - 1 runs, 0% failed, 100% of runs match
#2046110927064928256junit12 hours ago
2026-04-20T07:14:11Z node/ip-10-0-39-24.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9xd32638-0c51c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-24.ec2.internal?timeout=10s - read tcp 10.0.39.24:60826->10.0.42.91:6443: read: connection reset by peer
2026-04-20T07:21:31Z node/ip-10-0-31-178.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9xd32638-0c51c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-178.ec2.internal?timeout=10s - read tcp 10.0.31.178:53970->10.0.42.91:6443: read: connection reset by peer
2026-04-20T07:42:11Z node/ip-10-0-49-141.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9xd32638-0c51c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-141.ec2.internal?timeout=10s - read tcp 10.0.49.141:47462->10.0.42.91:6443: read: connection reset by peer
2026-04-20T07:45:35Z node/ip-10-0-49-141.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9xd32638-0c51c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-141.ec2.internal?timeout=10s - read tcp 10.0.49.141:44798->10.0.42.91:6443: read: connection reset by peer
2026-04-20T07:45:41Z node/ip-10-0-39-24.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9xd32638-0c51c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-24.ec2.internal?timeout=10s - read tcp 10.0.39.24:51376->10.0.42.91:6443: read: connection reset by peer
2026-04-20T07:49:41Z node/ip-10-0-44-57.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9xd32638-0c51c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-57.ec2.internal?timeout=10s - read tcp 10.0.44.57:37388->10.0.42.91:6443: read: connection reset by peer
2026-04-20T07:49:45Z node/ip-10-0-31-178.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9xd32638-0c51c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-31-178.ec2.internal?timeout=10s - read tcp 10.0.31.178:35978->10.0.42.91:6443: read: connection reset by peer
2026-04-20T08:38:22Z node/ip-10-0-39-24.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9xd32638-0c51c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-24.ec2.internal?timeout=10s - read tcp 10.0.39.24:55394->10.0.42.91:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.17-e2e-aws-ovn-serial (all) - 3 runs, 33% failed, 200% of failures match = 67% impact
#2046098892327489536junit12 hours ago
2026-04-20T07:47:08Z node/ip-10-0-3-175.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ii4qt384-73bab.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-3-175.us-west-1.compute.internal?timeout=10s - read tcp 10.0.3.175:55326->10.0.27.151:6443: read: connection reset by peer
2026-04-20T07:47:10Z node/ip-10-0-4-23.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ii4qt384-73bab.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-23.us-west-1.compute.internal?timeout=10s - read tcp 10.0.4.23:50566->10.0.27.151:6443: read: connection reset by peer
2026-04-20T07:47:13Z node/ip-10-0-71-247.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ii4qt384-73bab.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-247.us-west-1.compute.internal?timeout=10s - read tcp 10.0.71.247:40494->10.0.27.151:6443: read: connection reset by peer
2026-04-20T07:51:05Z node/ip-10-0-4-23.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ii4qt384-73bab.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-23.us-west-1.compute.internal?timeout=10s - read tcp 10.0.4.23:39874->10.0.80.222:6443: read: connection reset by peer
2026-04-20T07:55:44Z node/ip-10-0-4-181.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ii4qt384-73bab.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-181.us-west-1.compute.internal?timeout=10s - read tcp 10.0.4.181:59444->10.0.80.222:6443: read: connection reset by peer
#2046098892327489536junit12 hours ago
Apr 20 07:51:14.267 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org: Patch "https://api-int.ci-op-ii4qt384-73bab.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/egressips.k8s.ovn.org?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.71.247:48966->10.0.80.222:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for stable-system tests yet.)
Apr 20 07:51:14.267 - 26s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org: Patch "https://api-int.ci-op-ii4qt384-73bab.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/egressips.k8s.ovn.org?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.71.247:48966->10.0.80.222:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for stable-system tests yet.)
Apr 20 07:51:41.072 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045884051411177472junit27 hours ago
2026-04-19T17:44:31Z node/ip-10-0-84-236.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-73dfxh91-73bab.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-236.ec2.internal?timeout=10s - read tcp 10.0.84.236:41362->10.0.94.57:6443: read: connection reset by peer
2026-04-19T17:57:07Z node/ip-10-0-84-236.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-73dfxh91-73bab.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-236.ec2.internal?timeout=10s - read tcp 10.0.84.236:47876->10.0.94.57:6443: read: connection reset by peer
2026-04-19T17:57:11Z node/ip-10-0-54-4.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-73dfxh91-73bab.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-54-4.ec2.internal?timeout=10s - read tcp 10.0.54.4:40984->10.0.35.176:6443: read: connection reset by peer
periodic-ci-openshift-ovn-kubernetes-release-4.23-periodics-e2e-metal-ipi-ovn-dualstack-bgp-local-gw (all) - 2 runs, 0% failed, 100% of runs match
#2046076730954747904junit12 hours ago
2026-04-20T06:36:18Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:33682->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:36:19Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:55146->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:36:22Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:50462->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:36:22Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - write tcp 192.168.111.23:40810->192.168.111.5:6443: write: connection reset by peer
2026-04-20T06:36:24Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:42856->192.168.111.5:6443: read: connection reset by peer
#2045714271479795712junit37 hours ago
2026-04-19T06:27:55Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:47942->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:28:00Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:57904->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:33:28Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:43748->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:33:31Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:35386->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:33:33Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:41778->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:33:37Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:46394->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-ovn-kubernetes-release-4.22-periodics-e2e-metal-ipi-ovn-dualstack-bgp-local-gw (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#2046076713598717952junit13 hours ago
2026-04-20T06:19:39Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T06:33:42Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:37532->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:33:43Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:44224->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:33:47Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:40052->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:33:51Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:47958->192.168.111.5:6443: read: connection reset by peer
#2045714268170489856junit36 hours ago
2026-04-19T06:33:22Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:53530->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:33:25Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:47490->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:33:28Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:33942->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:33:28Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:51762->192.168.111.5:6443: read: connection reset by peer
pull-ci-openshift-cluster-network-operator-release-4.19-e2e-aws-ovn-serial-2of2 (all) - 1 runs, 0% failed, 100% of runs match
#2046110927241089024junit13 hours ago
2026-04-20T08:02:27Z node/ip-10-0-19-217.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-t5r1t6m6-39e31.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-217.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T08:18:13Z node/ip-10-0-45-86.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-t5r1t6m6-39e31.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-45-86.ec2.internal?timeout=10s - read tcp 10.0.45.86:46312->10.0.55.117:6443: read: connection reset by peer
2026-04-20T08:18:13Z node/ip-10-0-6-167.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-t5r1t6m6-39e31.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-167.ec2.internal?timeout=10s - read tcp 10.0.6.167:36108->10.0.55.117:6443: read: connection reset by peer
2026-04-20T08:26:23Z node/ip-10-0-6-167.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-t5r1t6m6-39e31.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-167.ec2.internal?timeout=10s - read tcp 10.0.6.167:42902->10.0.55.117:6443: read: connection reset by peer
2026-04-20T08:26:29Z node/ip-10-0-5-135.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-t5r1t6m6-39e31.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-135.ec2.internal?timeout=10s - read tcp 10.0.5.135:47418->10.0.55.117:6443: read: connection reset by peer
2026-04-20T08:26:33Z node/ip-10-0-45-86.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-t5r1t6m6-39e31.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-45-86.ec2.internal?timeout=10s - read tcp 10.0.45.86:51062->10.0.55.117:6443: read: connection reset by peer
#2046110927241089024junit13 hours ago
Apr 20 08:26:44.702 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project: Patch "https://api-int.ci-op-t5r1t6m6-39e31.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/net-attach-def-project?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.45.86:50780->10.0.55.117:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 20 08:26:44.702 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project: Patch "https://api-int.ci-op-t5r1t6m6-39e31.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/net-attach-def-project?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.45.86:50780->10.0.55.117:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 20 08:27:12.895 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
periodic-ci-openshift-release-main-ci-4.18-e2e-vsphere-crun-upgrade (all) - 2 runs, 0% failed, 100% of runs match
#2046119441560768512junit13 hours ago
2026-04-20T08:12:59Z node/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8?timeout=10s - read tcp 10.95.160.100:43062->10.95.160.8:6443: read: connection reset by peer
2026-04-20T08:13:19Z node/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T08:13:29Z node/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T08:13:35Z node/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8?timeout=10s - read tcp 10.95.160.100:37448->10.95.160.8:6443: read: connection reset by peer
2026-04-20T08:15:00Z node/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T08:15:10Z node/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T08:15:11Z node/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8?timeout=10s - read tcp 10.95.160.100:60648->10.95.160.8:6443: read: connection reset by peer
2026-04-20T08:15:12Z node/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8?timeout=10s - read tcp 10.95.160.100:60976->10.95.160.8:6443: read: connection reset by peer
2026-04-20T08:26:02Z node/ci-op-vpvr9ghw-993f7-7rg94-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-master-2?timeout=10s - dial tcp 10.95.160.8:6443: connect: connection refused
#2046119441560768512junit13 hours ago
2026-04-20T08:26:16Z node/ci-op-vpvr9ghw-993f7-7rg94-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-master-1?timeout=10s - net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
2026-04-20T08:31:53Z node/ci-op-vpvr9ghw-993f7-7rg94-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-master-0?timeout=10s - read tcp 10.95.160.96:48816->10.95.160.8:6443: read: connection reset by peer
2026-04-20T08:31:56Z node/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-worker-0-2wtl8?timeout=10s - read tcp 10.95.160.100:46820->10.95.160.8:6443: read: connection reset by peer
2026-04-20T08:31:58Z node/ci-op-vpvr9ghw-993f7-7rg94-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-vpvr9ghw-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vpvr9ghw-993f7-7rg94-master-1?timeout=10s - write tcp 10.95.160.98:50484->10.95.160.8:6443: write: connection reset by peer
#2045757049756717056junit37 hours ago
2026-04-19T08:28:04Z node/ci-op-8cq78d4s-993f7-r56wv-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-8cq78d4s-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-8cq78d4s-993f7-r56wv-master-2?timeout=10s - read tcp 10.93.251.98:42362->10.93.251.14:6443: read: connection reset by peer
2026-04-19T08:28:09Z node/ci-op-8cq78d4s-993f7-r56wv-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-8cq78d4s-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-8cq78d4s-993f7-r56wv-master-1?timeout=10s - write tcp 10.93.251.99:45912->10.93.251.14:6443: write: connection reset by peer
2026-04-19T08:28:10Z node/ci-op-8cq78d4s-993f7-r56wv-worker-0-hxmxb - reason/FailedToUpdateLease https://api-int.ci-op-8cq78d4s-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-8cq78d4s-993f7-r56wv-worker-0-hxmxb?timeout=10s - read tcp 10.93.251.102:37812->10.93.251.14:6443: read: connection reset by peer
2026-04-19T08:28:22Z node/ci-op-8cq78d4s-993f7-r56wv-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-8cq78d4s-993f7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-8cq78d4s-993f7-r56wv-master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-ci-4.15-e2e-aws-ovn-upgrade (all) - 4 runs, 50% failed, 150% of failures match = 75% impact
#2046102059018620928junit13 hours ago
2026-04-20T06:36:34Z node/ip-10-0-118-66.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-38nghbzh-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-118-66.ec2.internal?timeout=10s - read tcp 10.0.118.66:42806->10.0.98.91:6443: read: connection reset by peer
2026-04-20T06:36:40Z node/ip-10-0-40-30.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-38nghbzh-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-30.ec2.internal?timeout=10s - read tcp 10.0.40.30:46842->10.0.30.43:6443: read: connection reset by peer
2026-04-20T06:36:44Z node/ip-10-0-118-66.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-38nghbzh-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-118-66.ec2.internal?timeout=10s - read tcp 10.0.118.66:36056->10.0.98.91:6443: read: connection reset by peer
#2046102059018620928junit13 hours ago
Apr 20 06:32:54.225 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-controller-limited: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-controller-limited: Patch "https://api-int.ci-op-38nghbzh-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/openshift-ovn-kubernetes-controller-limited?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.3.58:46320->10.0.98.91:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 06:32:54.225 - 48s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-controller-limited: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-controller-limited: Patch "https://api-int.ci-op-38nghbzh-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/openshift-ovn-kubernetes-controller-limited?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.3.58:46320->10.0.98.91:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 06:33:42.229 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045647936586518528junit43 hours ago
2026-04-19T00:21:18Z node/ip-10-0-6-33.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-13q4njli-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-33.ec2.internal?timeout=10s - read tcp 10.0.6.33:48774->10.0.85.239:6443: read: connection reset by peer
2026-04-19T00:21:19Z node/ip-10-0-25-129.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-13q4njli-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-129.ec2.internal?timeout=10s - read tcp 10.0.25.129:46016->10.0.45.129:6443: read: connection reset by peer
2026-04-19T00:21:20Z node/ip-10-0-0-207.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-13q4njli-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-207.ec2.internal?timeout=10s - read tcp 10.0.0.207:49820->10.0.85.239:6443: read: connection reset by peer
2026-04-19T00:21:20Z node/ip-10-0-127-183.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-13q4njli-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-183.ec2.internal?timeout=10s - read tcp 10.0.127.183:52280->10.0.85.239:6443: read: connection reset by peer
2026-04-19T00:46:08Z node/ip-10-0-6-33.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-13q4njli-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-33.ec2.internal?timeout=10s - read tcp 10.0.6.33:43856->10.0.85.239:6443: read: connection reset by peer
2026-04-19T00:46:09Z node/ip-10-0-25-129.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-13q4njli-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-129.ec2.internal?timeout=10s - read tcp 10.0.25.129:46664->10.0.85.239:6443: read: connection reset by peer
#2045588530893164544junit46 hours ago
2026-04-18T20:33:03Z node/ip-10-0-13-179.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4ijyh3r3-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-179.us-west-2.compute.internal?timeout=10s - read tcp 10.0.13.179:45832->10.0.91.3:6443: read: connection reset by peer
2026-04-18T20:36:56Z node/ip-10-0-109-182.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4ijyh3r3-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-182.us-west-2.compute.internal?timeout=10s - read tcp 10.0.109.182:60640->10.0.91.3:6443: read: connection reset by peer
2026-04-18T20:37:29Z node/ip-10-0-13-179.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4ijyh3r3-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-179.us-west-2.compute.internal?timeout=10s - read tcp 10.0.13.179:53256->10.0.91.3:6443: read: connection reset by peer
2026-04-18T20:41:01Z node/ip-10-0-109-182.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4ijyh3r3-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-182.us-west-2.compute.internal?timeout=10s - read tcp 10.0.109.182:57524->10.0.33.233:6443: read: connection reset by peer
2026-04-18T20:41:04Z node/ip-10-0-111-230.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4ijyh3r3-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-230.us-west-2.compute.internal?timeout=10s - read tcp 10.0.111.230:45234->10.0.91.3:6443: read: connection reset by peer
2026-04-18T20:41:07Z node/ip-10-0-104-182.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4ijyh3r3-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-182.us-west-2.compute.internal?timeout=10s - read tcp 10.0.104.182:42906->10.0.33.233:6443: read: connection reset by peer
2026-04-18T20:41:12Z node/ip-10-0-109-182.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4ijyh3r3-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-182.us-west-2.compute.internal?timeout=10s - read tcp 10.0.109.182:54600->10.0.91.3:6443: read: connection reset by peer
2026-04-18T20:41:34Z node/ip-10-0-13-179.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4ijyh3r3-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-179.us-west-2.compute.internal?timeout=10s - read tcp 10.0.13.179:33750->10.0.91.3:6443: read: connection reset by peer
2026-04-18T21:03:22Z node/ip-10-0-10-216.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4ijyh3r3-ba8f7.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-10-216.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-ovn-kubernetes-release-4.20-periodics-e2e-metal-ipi-ovn-dualstack-bgp-local-gw (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#2046076701842083840junit13 hours ago
2026-04-20T06:21:33Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:41804->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:21:33Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:39946->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:21:35Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:51876->192.168.111.5:6443: read: connection reset by peer
#2045714256376107008junit37 hours ago
2026-04-19T05:19:25Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
2026-04-19T06:30:41Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:57770->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:30:41Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:40242->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:30:42Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:36682->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:30:43Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:33450->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-ovn-kubernetes-release-4.22-periodics-e2e-metal-ipi-ovn-dualstack-bgp (all) - 2 runs, 0% failed, 100% of runs match
#2046076710360715264junit13 hours ago
2026-04-20T06:38:43Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:41416->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:38:43Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:43164->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:38:45Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:53694->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:38:45Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:35510->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:44:07Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:47978->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:44:08Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:46716->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:44:09Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:44688->192.168.111.5:6443: read: connection reset by peer
#2045714266467602432junit37 hours ago
2026-04-19T05:48:14Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T06:01:29Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:53772->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:01:32Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:33472->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:01:34Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:33116->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:01:36Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:34652->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-ovn-kubernetes-release-5.0-periodics-e2e-metal-ipi-ovn-dualstack-bgp-local-gw (all) - 2 runs, 0% failed, 100% of runs match
#2046076747178315776junit13 hours ago
2026-04-20T06:23:55Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:58456->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:24:01Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:34534->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:24:03Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:45622->192.168.111.5:6443: read: connection reset by peer
#2045714276512960512junit37 hours ago
2026-04-19T06:14:35Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:54342->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:14:37Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:47574->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:14:39Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:46892->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:20:00Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:38126->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:20:01Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:45882->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:20:05Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:42964->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:20:06Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:50498->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.15-e2e-aws-ovn-serial (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2046102044749598720junit13 hours ago
2026-04-20T06:32:09Z node/ip-10-0-84-63.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ch8kg6g2-b1083.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-63.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T07:40:08Z node/ip-10-0-104-223.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ch8kg6g2-b1083.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-223.us-east-2.compute.internal?timeout=10s - read tcp 10.0.104.223:58744->10.0.94.31:6443: read: connection reset by peer
2026-04-20T07:40:41Z node/ip-10-0-90-51.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ch8kg6g2-b1083.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-51.us-east-2.compute.internal?timeout=10s - read tcp 10.0.90.51:57388->10.0.94.31:6443: read: connection reset by peer
2026-04-20T07:44:40Z node/ip-10-0-84-63.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ch8kg6g2-b1083.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-63.us-east-2.compute.internal?timeout=10s - read tcp 10.0.84.63:38952->10.0.94.31:6443: read: connection reset by peer
2026-04-20T07:44:43Z node/ip-10-0-28-40.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ch8kg6g2-b1083.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-28-40.us-east-2.compute.internal?timeout=10s - read tcp 10.0.28.40:47514->10.0.94.31:6443: read: connection reset by peer
2026-04-20T07:48:39Z node/ip-10-0-28-40.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ch8kg6g2-b1083.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-28-40.us-east-2.compute.internal?timeout=10s - read tcp 10.0.28.40:47822->10.0.26.165:6443: read: connection reset by peer
2026-04-20T07:48:45Z node/ip-10-0-84-63.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ch8kg6g2-b1083.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-63.us-east-2.compute.internal?timeout=10s - read tcp 10.0.84.63:44108->10.0.26.165:6443: read: connection reset by peer
2026-04-20T07:52:33Z node/ip-10-0-104-223.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ch8kg6g2-b1083.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-223.us-east-2.compute.internal?timeout=10s - read tcp 10.0.104.223:55084->10.0.94.31:6443: read: connection reset by peer
2026-04-20T07:52:34Z node/ip-10-0-28-40.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ch8kg6g2-b1083.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-28-40.us-east-2.compute.internal?timeout=10s - read tcp 10.0.28.40:59172->10.0.94.31:6443: read: connection reset by peer
periodic-ci-openshift-machine-config-operator-release-4.22-periodics-e2e-vsphere-mco-tp-longduration (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046011226357501952junit13 hours ago
2026-04-20T02:43:37Z node/ci-op-l0gf618i-d0937-j2n88-worker-0-xgjmq - reason/FailedToUpdateLease https://api-int.ci-op-l0gf618i-d0937.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-l0gf618i-d0937-j2n88-worker-0-xgjmq?timeout=10s - read tcp 10.93.152.115:48904->10.93.152.14:6443: read: connection reset by peer
2026-04-20T02:45:34Z node/ci-op-l0gf618i-d0937-j2n88-worker-0-tqsmr - reason/FailedToUpdateLease https://api-int.ci-op-l0gf618i-d0937.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-l0gf618i-d0937-j2n88-worker-0-tqsmr?timeout=10s - read tcp 10.93.152.114:48680->10.93.152.14:6443: read: connection reset by peer
2026-04-20T02:45:38Z node/ci-op-l0gf618i-d0937-j2n88-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-l0gf618i-d0937.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-l0gf618i-d0937-j2n88-master-2?timeout=10s - read tcp 10.93.152.108:54332->10.93.152.14:6443: read: connection reset by peer
periodic-ci-openshift-ovn-kubernetes-release-4.21-periodics-e2e-metal-ipi-ovn-dualstack-bgp-local-gw (all) - 2 runs, 0% failed, 100% of runs match
#2046076707718303744junit13 hours ago
2026-04-20T06:05:48Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:39580->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:05:49Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:41188->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:05:53Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:36520->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:05:53Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:55748->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:05:54Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:54424->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:11:16Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:41460->192.168.111.5:6443: read: connection reset by peer
#2045714264102014976junit37 hours ago
2026-04-19T06:14:02Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:42204->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:14:05Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:55740->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:14:07Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:47688->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:14:08Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:58970->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:14:10Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:51992->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-ovn-kubernetes-release-4.23-periodics-e2e-metal-ipi-ovn-dualstack-bgp (all) - 2 runs, 0% failed, 100% of runs match
#2046076722096377856junit13 hours ago
2026-04-20T06:15:12Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - write tcp 192.168.111.20:59854->192.168.111.5:6443: write: connection reset by peer
2026-04-20T06:15:17Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:33776->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:15:17Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:41296->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:15:19Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:52980->192.168.111.5:6443: read: connection reset by peer
#2045714269793685504junit37 hours ago
2026-04-19T06:16:31Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:42054->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:16:35Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:53720->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:16:38Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:45666->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.15-e2e-aws-sdn-upgrade (all) - 2 runs, 0% failed, 100% of runs match
#2046102066576756736junit13 hours ago
2026-04-20T06:37:50Z node/ip-10-0-99-185.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-my851vxt-a604e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-99-185.us-east-2.compute.internal?timeout=10s - read tcp 10.0.99.185:47350->10.0.41.61:6443: read: connection reset by peer
2026-04-20T07:21:40Z node/ip-10-0-4-171.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-my851vxt-a604e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-171.us-east-2.compute.internal?timeout=10s - read tcp 10.0.4.171:55610->10.0.105.237:6443: read: connection reset by peer
#2046102066576756736junit13 hours ago
Apr 20 06:37:54.264 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io: failed to apply / update (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io: Patch "https://api-int.ci-op-my851vxt-a604e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/network-node-identity.openshift.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.23.30:54732->10.0.105.237:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 06:37:54.264 - 42s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io: failed to apply / update (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io: Patch "https://api-int.ci-op-my851vxt-a604e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/network-node-identity.openshift.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.23.30:54732->10.0.105.237:6443: read: connection reset by peer (exception: We are not worried about Degraded=True blips for update tests yet.)
Apr 20 06:38:37.052 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045572234197602304junit2 days ago
2026-04-18T19:20:43Z node/ip-10-0-28-36.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zd9qpqg1-a604e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-28-36.ec2.internal?timeout=10s - read tcp 10.0.28.36:54884->10.0.39.251:6443: read: connection reset by peer
2026-04-18T19:20:44Z node/ip-10-0-8-167.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zd9qpqg1-a604e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-167.ec2.internal?timeout=10s - read tcp 10.0.8.167:40050->10.0.39.251:6443: read: connection reset by peer
2026-04-18T19:20:54Z node/ip-10-0-39-61.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zd9qpqg1-a604e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-61.ec2.internal?timeout=10s - read tcp 10.0.39.61:59256->10.0.93.177:6443: read: connection reset by peer
2026-04-18T20:08:12Z node/ip-10-0-8-167.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-zd9qpqg1-a604e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-8-167.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-ovn-kubernetes-release-4.21-periodics-e2e-metal-ipi-ovn-dualstack-bgp (all) - 2 runs, 0% failed, 100% of runs match
#2046076705197527040junit13 hours ago
2026-04-20T05:20:38Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - context deadline exceeded
2026-04-20T06:18:06Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:49832->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:18:07Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:55380->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:18:07Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:60134->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:18:08Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:42278->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:23:28Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:33762->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:23:33Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:37438->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:23:33Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:42396->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:23:35Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:36218->192.168.111.5:6443: read: connection reset by peer
#2045714261409271808junit37 hours ago
2026-04-19T05:45:38Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:49988->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:45:41Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:46188->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:45:42Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:44238->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:45:44Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:39016->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:51:13Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:60064->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:51:17Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:55590->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:51:18Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:56750->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:51:18Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:57802->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:51:18Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:36640->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-ovn-kubernetes-release-4.20-periodics-e2e-metal-ipi-ovn-dualstack-bgp (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#2046076699325501440junit13 hours ago
2026-04-20T06:02:11Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:39012->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:02:11Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:55612->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:02:13Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:55324->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:02:13Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:52838->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:02:17Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:44644->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:07:37Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:51066->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:07:37Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:53390->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:07:38Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:46004->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:07:39Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:49334->192.168.111.5:6443: read: connection reset by peer
2026-04-20T06:07:44Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:55286->192.168.111.5:6443: read: connection reset by peer
2026-04-20T07:07:27Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
#2045714254698385408junit37 hours ago
2026-04-19T05:58:51Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:33698->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:58:51Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:47848->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:58:51Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:58880->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:04:21Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:36560->192.168.111.5:6443: read: connection reset by peer
2026-04-19T06:04:21Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:40276->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-ovn-kubernetes-release-5.0-periodics-e2e-metal-ipi-ovn-dualstack-bgp (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#2046076743810289664junit14 hours ago
2026-04-20T05:47:18Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:33292->192.168.111.5:6443: read: connection reset by peer
2026-04-20T05:47:20Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:45986->192.168.111.5:6443: read: connection reset by peer
2026-04-20T05:47:21Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:60770->192.168.111.5:6443: read: connection reset by peer
2026-04-20T05:47:24Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:43462->192.168.111.5:6443: read: connection reset by peer
2026-04-20T05:52:34Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:45594->192.168.111.5:6443: read: connection reset by peer
2026-04-20T05:52:36Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:52022->192.168.111.5:6443: read: connection reset by peer
2026-04-20T05:52:38Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:37080->192.168.111.5:6443: read: connection reset by peer
2026-04-20T05:52:39Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:45708->192.168.111.5:6443: read: connection reset by peer
2026-04-20T05:52:42Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:39022->192.168.111.5:6443: read: connection reset by peer
#2045714275674099712junit38 hours ago
2026-04-19T05:25:24Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
2026-04-19T05:37:41Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:57650->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:37:41Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - dial tcp 192.168.111.5:6443: connect: connection refused
periodic-ci-openshift-release-main-ci-4.19-e2e-aws-runc-upgrade (all) - 2 runs, 0% failed, 100% of runs match
#2046073135962263552junit14 hours ago
2026-04-20T04:48:01Z node/ip-10-0-59-16.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-npbm06xd-7a2dc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-16.us-east-2.compute.internal?timeout=10s - read tcp 10.0.59.16:33492->10.0.1.3:6443: read: connection reset by peer
2026-04-20T04:52:13Z node/ip-10-0-25-147.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-npbm06xd-7a2dc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-147.us-east-2.compute.internal?timeout=10s - read tcp 10.0.25.147:49660->10.0.1.3:6443: read: connection reset by peer
2026-04-20T04:55:49Z node/ip-10-0-16-52.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-npbm06xd-7a2dc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-52.us-east-2.compute.internal?timeout=10s - read tcp 10.0.16.52:35144->10.0.125.17:6443: read: connection reset by peer
2026-04-20T05:46:07Z node/ip-10-0-59-16.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-npbm06xd-7a2dc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-16.us-east-2.compute.internal?timeout=10s - read tcp 10.0.59.16:51474->10.0.1.3:6443: read: connection reset by peer
2026-04-20T05:46:07Z node/ip-10-0-59-16.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-npbm06xd-7a2dc.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-16.us-east-2.compute.internal?timeout=10s - read tcp 10.0.59.16:59976->10.0.125.17:6443: read: connection reset by peer
#2045710744011411456junit38 hours ago
2026-04-19T04:49:31Z node/ip-10-0-51-6.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bkr36shn-7a2dc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-6.ec2.internal?timeout=10s - read tcp 10.0.51.6:41264->10.0.16.128:6443: read: connection reset by peer
2026-04-19T04:49:33Z node/ip-10-0-73-85.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bkr36shn-7a2dc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-85.ec2.internal?timeout=10s - read tcp 10.0.73.85:39786->10.0.16.128:6443: read: connection reset by peer
2026-04-19T04:49:39Z node/ip-10-0-110-142.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bkr36shn-7a2dc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-110-142.ec2.internal?timeout=10s - read tcp 10.0.110.142:43078->10.0.16.128:6443: read: connection reset by peer
2026-04-19T04:53:33Z node/ip-10-0-110-142.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bkr36shn-7a2dc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-110-142.ec2.internal?timeout=10s - read tcp 10.0.110.142:42958->10.0.16.128:6443: read: connection reset by peer
2026-04-19T04:54:00Z node/ip-10-0-70-80.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bkr36shn-7a2dc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-80.ec2.internal?timeout=10s - read tcp 10.0.70.80:39520->10.0.16.128:6443: read: connection reset by peer
2026-04-19T04:57:28Z node/ip-10-0-110-142.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bkr36shn-7a2dc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-110-142.ec2.internal?timeout=10s - read tcp 10.0.110.142:42284->10.0.107.85:6443: read: connection reset by peer
2026-04-19T04:57:32Z node/ip-10-0-51-6.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bkr36shn-7a2dc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-6.ec2.internal?timeout=10s - read tcp 10.0.51.6:58338->10.0.107.85:6443: read: connection reset by peer
2026-04-19T05:48:44Z node/ip-10-0-40-93.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bkr36shn-7a2dc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-93.ec2.internal?timeout=10s - read tcp 10.0.40.93:53456->10.0.16.128:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.12-e2e-aws-sdn-serial (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2046090637802999808junit14 hours ago
Apr 20 06:21:11.115 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-185-202.ec2.internal node/ip-10-0-185-202.ec2.internal uid/b80feefe-3a69-4ca7-bdd2-a0cf7bf2e1fb container/kube-apiserver-check-endpoints reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Apr 20 06:31:51.690 E clusteroperator/network condition/Degraded status/True reason/ApplyOperatorConfig changed: Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts: Patch "https://api-int.ci-op-2tvlclm7-2214a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/rolebindings/multus-whereabouts?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.185.202:57376->10.0.129.186:6443: read: connection reset by peer
Apr 20 06:31:51.690 - 35s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts: Patch "https://api-int.ci-op-2tvlclm7-2214a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/rolebindings/multus-whereabouts?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.185.202:57376->10.0.129.186:6443: read: connection reset by peer
Apr 20 06:39:38.056 E clusteroperator/network condition/Degraded status/True reason/ApplyOperatorConfig changed: Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io: Patch "https://api-int.ci-op-2tvlclm7-2214a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/network-attachment-definitions.k8s.cni.cncf.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.185.202:41384->10.0.198.164:6443: read: connection reset by peer
Apr 20 06:39:38.056 - 34s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io: Patch "https://api-int.ci-op-2tvlclm7-2214a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/network-attachment-definitions.k8s.cni.cncf.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.185.202:41384->10.0.198.164:6443: read: connection reset by peer
Apr 20 06:42:49.771 E ns/e2e-test-oc-status-r4f2q-project-status pod/pi-w9thn node/ip-10-0-170-241.ec2.internal uid/e336a66f-fefb-4cc4-b8a4-bbd73c62b6de container/pi reason/ContainerExit code/2 cause/Error
periodic-ci-openshift-release-main-ci-4.21-e2e-vsphere-ovn-upgrade (all) - 6 runs, 33% failed, 200% of failures match = 67% impact
#2046097363923111936junit14 hours ago
2026-04-20T06:51:32Z node/ci-op-1qb67kkn-e37cb-vtjbx-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-1qb67kkn-e37cb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-1qb67kkn-e37cb-vtjbx-master-1?timeout=10s - read tcp 10.93.157.108:52064->10.93.157.14:6443: read: connection reset by peer
2026-04-20T06:51:33Z node/ci-op-1qb67kkn-e37cb-vtjbx-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-1qb67kkn-e37cb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-1qb67kkn-e37cb-vtjbx-master-0?timeout=10s - read tcp 10.93.157.109:54448->10.93.157.14:6443: read: connection reset by peer
#2045873532650393600junit29 hours ago
2026-04-19T16:21:26Z node/ci-op-8cdc3rqt-e37cb-wg4mv-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-8cdc3rqt-e37cb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-8cdc3rqt-e37cb-wg4mv-master-2?timeout=10s - read tcp 10.93.157.96:59494->10.93.157.16:6443: read: connection reset by peer
2026-04-19T16:21:31Z node/ci-op-8cdc3rqt-e37cb-wg4mv-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-8cdc3rqt-e37cb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-8cdc3rqt-e37cb-wg4mv-master-1?timeout=10s - read tcp 10.93.157.94:59680->10.93.157.16:6443: read: connection reset by peer
2026-04-19T16:21:34Z node/ci-op-8cdc3rqt-e37cb-wg4mv-worker-0-f94wj - reason/FailedToUpdateLease https://api-int.ci-op-8cdc3rqt-e37cb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-8cdc3rqt-e37cb-wg4mv-worker-0-f94wj?timeout=10s - read tcp 10.93.157.99:33680->10.93.157.16:6443: read: connection reset by peer
#2045715772747026432junit40 hours ago
2026-04-19T05:37:59Z node/ci-op-87mqpd64-e37cb-qjvcx-worker-0-tz2xj - reason/FailedToUpdateLease https://api-int.ci-op-87mqpd64-e37cb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-87mqpd64-e37cb-qjvcx-worker-0-tz2xj?timeout=10s - read tcp 10.93.152.88:50886->10.93.152.10:6443: read: connection reset by peer
2026-04-19T05:38:02Z node/ci-op-87mqpd64-e37cb-qjvcx-worker-0-9dsh7 - reason/FailedToUpdateLease https://api-int.ci-op-87mqpd64-e37cb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-87mqpd64-e37cb-qjvcx-worker-0-9dsh7?timeout=10s - read tcp 10.93.152.91:38160->10.93.152.10:6443: read: connection reset by peer
#2045599419587366912junit47 hours ago
2026-04-18T21:46:06Z node/ci-op-9155sh9d-e37cb-wzk89-worker-0-47mhq - reason/FailedToUpdateLease https://api-int.ci-op-9155sh9d-e37cb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-9155sh9d-e37cb-wzk89-worker-0-47mhq?timeout=10s - read tcp 10.95.160.52:49666->10.95.160.8:6443: read: connection reset by peer
2026-04-18T21:51:57Z node/ci-op-9155sh9d-e37cb-wzk89-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-9155sh9d-e37cb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-9155sh9d-e37cb-wzk89-master-2?timeout=10s - read tcp 10.95.160.34:52612->10.95.160.8:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ovn-two-node-arbiter-workload-partitioning (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046046376579567616junit15 hours ago
2026-04-20T04:25:19Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:51714->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:25:20Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:42196->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:30:59Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:60164->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:31:07Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:60748->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-arbiter-upgrade (all) - 2 runs, 0% failed, 100% of runs match
#2046073136251670528junit15 hours ago
2026-04-20T05:09:27Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - EOF
2026-04-20T06:26:02Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:41970->192.168.111.5:6443: read: connection reset by peer
#2045710744292429824junit39 hours ago
2026-04-19T06:41:20Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:50598->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-upgrade-from-stable-4.19-e2e-vsphere-upgrade-vcf9 (all) - 2 runs, 0% failed, 50% of runs match
#2046085723035013120junit15 hours ago
2026-04-20T06:19:30Z node/ci-op-mijirmrj-06f5e-g92zf-worker-0-hxvqz - reason/FailedToUpdateLease https://api-int.ci-op-mijirmrj-06f5e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-mijirmrj-06f5e-g92zf-worker-0-hxvqz?timeout=10s - read tcp 10.93.157.105:39672->10.93.157.10:6443: read: connection reset by peer
2026-04-20T06:19:30Z node/ci-op-mijirmrj-06f5e-g92zf-worker-0-mv4fr - reason/FailedToUpdateLease https://api-int.ci-op-mijirmrj-06f5e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-mijirmrj-06f5e-g92zf-worker-0-mv4fr?timeout=10s - read tcp 10.93.157.106:53752->10.93.157.10:6443: read: connection reset by peer
2026-04-20T06:19:33Z node/ci-op-mijirmrj-06f5e-g92zf-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-mijirmrj-06f5e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-mijirmrj-06f5e-g92zf-master-2?timeout=10s - write tcp 10.93.157.102:55026->10.93.157.10:6443: write: connection reset by peer
2026-04-20T06:19:37Z node/ci-op-mijirmrj-06f5e-g92zf-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-mijirmrj-06f5e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-mijirmrj-06f5e-g92zf-master-0?timeout=10s - read tcp 10.93.157.101:49464->10.93.157.10:6443: read: connection reset by peer
2026-04-20T06:19:46Z node/ci-op-mijirmrj-06f5e-g92zf-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-mijirmrj-06f5e.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-mijirmrj-06f5e-g92zf-master-1?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-serial-runc (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#2046057784671211520junit15 hours ago
2026-04-20T04:52:10Z node/ip-10-0-49-62.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1c9f7iff-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-62.us-east-2.compute.internal?timeout=10s - read tcp 10.0.49.62:37274->10.0.110.183:6443: read: connection reset by peer
2026-04-20T04:52:16Z node/ip-10-0-43-15.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1c9f7iff-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-15.us-east-2.compute.internal?timeout=10s - read tcp 10.0.43.15:58658->10.0.110.183:6443: read: connection reset by peer
2026-04-20T04:56:01Z node/ip-10-0-43-15.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1c9f7iff-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-15.us-east-2.compute.internal?timeout=10s - read tcp 10.0.43.15:49426->10.0.110.183:6443: read: connection reset by peer
2026-04-20T05:00:52Z node/ip-10-0-49-62.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1c9f7iff-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-62.us-east-2.compute.internal?timeout=10s - read tcp 10.0.49.62:39248->10.0.49.221:6443: read: connection reset by peer
2026-04-20T05:00:56Z node/ip-10-0-126-104.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1c9f7iff-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-104.us-east-2.compute.internal?timeout=10s - read tcp 10.0.126.104:47308->10.0.49.221:6443: read: connection reset by peer
2026-04-20T05:04:51Z node/ip-10-0-43-15.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1c9f7iff-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-15.us-east-2.compute.internal?timeout=10s - read tcp 10.0.43.15:45418->10.0.49.221:6443: read: connection reset by peer
2026-04-20T05:08:42Z node/ip-10-0-49-62.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1c9f7iff-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-62.us-east-2.compute.internal?timeout=10s - read tcp 10.0.49.62:39748->10.0.49.221:6443: read: connection reset by peer
2026-04-20T05:09:10Z node/ip-10-0-0-246.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1c9f7iff-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-0-246.us-east-2.compute.internal?timeout=10s - read tcp 10.0.0.246:57202->10.0.110.183:6443: read: connection reset by peer
2026-04-20T05:09:17Z node/ip-10-0-43-15.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1c9f7iff-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-15.us-east-2.compute.internal?timeout=10s - read tcp 10.0.43.15:33636->10.0.49.221:6443: read: connection reset by peer
#2045695393039126528junit39 hours ago
2026-04-19T04:59:58Z node/ip-10-0-107-13.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z0zxxk2w-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-13.ec2.internal?timeout=10s - read tcp 10.0.107.13:42418->10.0.75.107:6443: read: connection reset by peer
2026-04-19T05:08:52Z node/ip-10-0-36-105.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z0zxxk2w-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-105.ec2.internal?timeout=10s - read tcp 10.0.36.105:58392->10.0.17.40:6443: read: connection reset by peer
2026-04-19T05:08:54Z node/ip-10-0-54-128.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z0zxxk2w-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-54-128.ec2.internal?timeout=10s - read tcp 10.0.54.128:39652->10.0.75.107:6443: read: connection reset by peer
2026-04-19T05:16:51Z node/ip-10-0-36-105.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-z0zxxk2w-1bf75.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-105.ec2.internal?timeout=10s - read tcp 10.0.36.105:49950->10.0.75.107:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-2of2 (all) - 7 runs, 29% failed, 50% of failures match = 14% impact
#2046045004270407680junit15 hours ago
2026-04-20T04:12:57Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T04:12:59Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:52258->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:12:59Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:47742->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:13:00Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T04:13:02Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - context deadline exceeded
2026-04-20T04:13:02Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:57906->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:13:08Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-arbiter-workload-partitioning (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046046367352098816junit15 hours ago
2026-04-20T04:00:04Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:41874->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:00:05Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:47590->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.21-e2e-metal-ovn-two-node-arbiter-workload-partitioning (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046046362314739712junit16 hours ago
2026-04-20T03:36:27Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:36142->192.168.111.5:6443: read: connection reset by peer
2026-04-20T03:36:30Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:53262->192.168.111.5:6443: read: connection reset by peer
2026-04-20T03:54:40Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-ci-4.18-e2e-aws-cgroupsv1-upgrade (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2046053256295092224junit16 hours ago
2026-04-20T03:29:08Z node/ip-10-0-63-146.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3bq4bcsw-0d4ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-63-146.us-west-1.compute.internal?timeout=10s - read tcp 10.0.63.146:53916->10.0.33.25:6443: read: connection reset by peer
2026-04-20T03:29:18Z node/ip-10-0-63-146.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3bq4bcsw-0d4ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-63-146.us-west-1.compute.internal?timeout=10s - read tcp 10.0.63.146:55384->10.0.91.202:6443: read: connection reset by peer
2026-04-20T03:33:15Z node/ip-10-0-90-102.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3bq4bcsw-0d4ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-102.us-west-1.compute.internal?timeout=10s - read tcp 10.0.90.102:47402->10.0.33.25:6443: read: connection reset by peer
2026-04-20T03:33:15Z node/ip-10-0-11-105.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3bq4bcsw-0d4ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-105.us-west-1.compute.internal?timeout=10s - read tcp 10.0.11.105:58342->10.0.91.202:6443: read: connection reset by peer
2026-04-20T04:20:48Z node/ip-10-0-46-117.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3bq4bcsw-0d4ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-117.us-west-1.compute.internal?timeout=10s - read tcp 10.0.46.117:39702->10.0.33.25:6443: read: connection reset by peer
2026-04-20T04:20:50Z node/ip-10-0-34-252.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3bq4bcsw-0d4ef.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-34-252.us-west-1.compute.internal?timeout=10s - read tcp 10.0.34.252:60734->10.0.33.25:6443: read: connection reset by peer
periodic-ci-openshift-cluster-storage-operator-release-4.21-periodics-periodic-e2e-aws-ovn-upgrade-check-dev-symlinks (all) - 2 runs, 0% failed, 100% of runs match
#2046062565523460096junit16 hours ago
2026-04-20T03:58:02Z node/ip-10-0-50-174.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w2tr5qcv-85f8c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-174.ec2.internal?timeout=10s - read tcp 10.0.50.174:56080->10.0.4.230:6443: read: connection reset by peer
2026-04-20T03:58:07Z node/ip-10-0-30-152.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w2tr5qcv-85f8c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-152.ec2.internal?timeout=10s - read tcp 10.0.30.152:41158->10.0.4.230:6443: read: connection reset by peer
2026-04-20T04:20:13Z node/ip-10-0-30-152.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w2tr5qcv-85f8c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-152.ec2.internal?timeout=10s - read tcp 10.0.30.152:52862->10.0.4.230:6443: read: connection reset by peer
2026-04-20T04:23:57Z node/ip-10-0-83-0.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w2tr5qcv-85f8c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-0.ec2.internal?timeout=10s - read tcp 10.0.83.0:33098->10.0.71.7:6443: read: connection reset by peer
2026-04-20T04:24:04Z node/ip-10-0-64-166.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w2tr5qcv-85f8c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-166.ec2.internal?timeout=10s - read tcp 10.0.64.166:36974->10.0.71.7:6443: read: connection reset by peer
2026-04-20T05:07:23Z node/ip-10-0-83-0.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w2tr5qcv-85f8c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-0.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045700174029787136junit40 hours ago
2026-04-19T04:05:02Z node/ip-10-0-20-66.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w0mc41i9-85f8c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-66.ec2.internal?timeout=10s - read tcp 10.0.20.66:46630->10.0.63.88:6443: read: connection reset by peer
2026-04-19T04:05:04Z node/ip-10-0-72-0.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w0mc41i9-85f8c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-0.ec2.internal?timeout=10s - read tcp 10.0.72.0:35310->10.0.63.88:6443: read: connection reset by peer
2026-04-19T04:09:07Z node/ip-10-0-39-24.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w0mc41i9-85f8c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-24.ec2.internal?timeout=10s - read tcp 10.0.39.24:57052->10.0.63.88:6443: read: connection reset by peer
2026-04-19T04:13:01Z node/ip-10-0-20-66.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w0mc41i9-85f8c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-66.ec2.internal?timeout=10s - read tcp 10.0.20.66:35786->10.0.63.88:6443: read: connection reset by peer
2026-04-19T04:55:12Z node/ip-10-0-72-0.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-w0mc41i9-85f8c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-0.ec2.internal?timeout=10s - context deadline exceeded
#2045700174029787136junit40 hours ago
Apr 19 04:09:01.906 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity: failed to apply / update (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity: Patch "https://api-int.ci-op-w0mc41i9-85f8c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/network.operator.openshift.io/v1/namespaces/openshift-network-node-identity/operatorpkis/network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.100.168:40032->10.0.118.99:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 04:09:01.906 - 27s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity: failed to apply / update (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity: Patch "https://api-int.ci-op-w0mc41i9-85f8c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/network.operator.openshift.io/v1/namespaces/openshift-network-node-identity/operatorpkis/network-node-identity?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.100.168:40032->10.0.118.99:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 04:09:29.866 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
periodic-ci-openshift-release-main-nightly-4.20-e2e-metal-ovn-two-node-arbiter-workload-partitioning (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046046358112047104junit16 hours ago
2026-04-20T03:25:49Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - write tcp 192.168.111.20:32830->192.168.111.5:6443: write: connection reset by peer
2026-04-20T03:25:50Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:33650->192.168.111.5:6443: read: connection reset by peer
2026-04-20T03:32:50Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:58894->192.168.111.5:6443: read: connection reset by peer
2026-04-20T03:32:51Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:41114->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.21-e2e-metal-ovn-two-node-arbiter-upgrade (all) - 2 runs, 0% failed, 50% of runs match
#2046057784721543168junit16 hours ago
2026-04-20T04:53:55Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:52986->192.168.111.5:6443: read: connection reset by peer
2026-04-20T05:00:19Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:35838->192.168.111.5:6443: read: connection reset by peer
2026-04-20T05:00:23Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:47102->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.20-ocp-e2e-ovn-serial-aws-multi-a-a-2of2 (all) - 6 runs, 33% failed, 250% of failures match = 83% impact
#2046041618594664448junit16 hours ago
Apr 20 03:54:43.572 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics: failed to apply / update (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics: Patch "https://api-int.ci-op-42fkmdsh-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/network-diagnostics?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.81.239:60240->10.0.66.225:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 20 03:54:43.572 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics: failed to apply / update (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics: Patch "https://api-int.ci-op-42fkmdsh-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/network-diagnostics?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.81.239:60240->10.0.66.225:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 20 03:55:12.541 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2046041618594664448junit16 hours ago
2026-04-20T03:50:40Z node/ip-10-0-29-123.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-42fkmdsh-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-123.us-west-2.compute.internal?timeout=10s - read tcp 10.0.29.123:42240->10.0.30.46:6443: read: connection reset by peer
2026-04-20T03:59:21Z node/ip-10-0-29-123.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-42fkmdsh-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-123.us-west-2.compute.internal?timeout=10s - read tcp 10.0.29.123:35182->10.0.30.46:6443: read: connection reset by peer
2026-04-20T03:59:22Z node/ip-10-0-48-87.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-42fkmdsh-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-87.us-west-2.compute.internal?timeout=10s - read tcp 10.0.48.87:37446->10.0.66.225:6443: read: connection reset by peer
2026-04-20T03:59:23Z node/ip-10-0-120-236.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-42fkmdsh-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-236.us-west-2.compute.internal?timeout=10s - read tcp 10.0.120.236:38736->10.0.30.46:6443: read: connection reset by peer
2026-04-20T03:59:28Z node/ip-10-0-66-238.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-42fkmdsh-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-66-238.us-west-2.compute.internal?timeout=10s - read tcp 10.0.66.238:35296->10.0.66.225:6443: read: connection reset by peer
2026-04-20T04:34:15Z node/ip-10-0-120-236.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-42fkmdsh-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-236.us-west-2.compute.internal?timeout=10s - context deadline exceeded
#2045956874884354048junit22 hours ago
2026-04-19T21:08:04Z node/ip-10-0-20-136.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-136.ec2.internal?timeout=10s - read tcp 10.0.20.136:37938->10.0.102.111:6443: read: connection reset by peer
2026-04-19T21:08:08Z node/ip-10-0-120-255.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-255.ec2.internal?timeout=10s - read tcp 10.0.120.255:54222->10.0.102.111:6443: read: connection reset by peer
2026-04-19T21:08:21Z node/ip-10-0-71-199.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-199.ec2.internal?timeout=10s - read tcp 10.0.71.199:44506->10.0.102.111:6443: read: connection reset by peer
2026-04-19T22:17:44Z node/ip-10-0-120-255.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-255.ec2.internal?timeout=10s - read tcp 10.0.120.255:40600->10.0.102.111:6443: read: connection reset by peer
2026-04-19T22:21:59Z node/ip-10-0-71-199.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-199.ec2.internal?timeout=10s - read tcp 10.0.71.199:47956->10.0.27.75:6443: read: connection reset by peer
2026-04-19T22:26:46Z node/ip-10-0-102-221.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-221.ec2.internal?timeout=10s - read tcp 10.0.102.221:40808->10.0.102.111:6443: read: connection reset by peer
2026-04-19T22:27:02Z node/ip-10-0-105-254.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-254.ec2.internal?timeout=10s - read tcp 10.0.105.254:57078->10.0.102.111:6443: read: connection reset by peer
2026-04-19T22:30:40Z node/ip-10-0-120-255.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-255.ec2.internal?timeout=10s - read tcp 10.0.120.255:56012->10.0.102.111:6443: read: connection reset by peer
2026-04-19T22:30:43Z node/ip-10-0-102-221.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-221.ec2.internal?timeout=10s - read tcp 10.0.102.221:39808->10.0.102.111:6443: read: connection reset by peer
2026-04-19T22:30:47Z node/ip-10-0-105-254.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-254.ec2.internal?timeout=10s - read tcp 10.0.105.254:60158->10.0.102.111:6443: read: connection reset by peer
2026-04-19T22:30:50Z node/ip-10-0-71-199.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-199.ec2.internal?timeout=10s - read tcp 10.0.71.199:43982->10.0.27.75:6443: read: connection reset by peer
2026-04-19T22:35:07Z node/ip-10-0-102-221.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-221.ec2.internal?timeout=10s - read tcp 10.0.102.221:49380->10.0.27.75:6443: read: connection reset by peer
2026-04-19T22:50:00Z node/ip-10-0-20-136.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-gpsw0sr9-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-136.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045822624621137920junit31 hours ago
Apr 19 12:54:22.840 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane: failed to apply / update (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane: Patch "https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-ovn-kubernetes/services/ovn-kubernetes-control-plane?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.79.122:54580->10.0.29.157:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 12:54:22.840 - 29s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane: failed to apply / update (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane: Patch "https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-ovn-kubernetes/services/ovn-kubernetes-control-plane?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.79.122:54580->10.0.29.157:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 12:54:51.867 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
Apr 19 12:58:20.237 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm: failed to apply / update (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm: Patch "https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-node-identity/configmaps/ovnkube-identity-cm?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.79.122:39722->10.0.68.198:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 12:58:20.237 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm: failed to apply / update (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm: Patch "https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-network-node-identity/configmaps/ovnkube-identity-cm?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.79.122:39722->10.0.68.198:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 12:58:49.195 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045822624621137920junit31 hours ago
2026-04-19T11:59:15Z node/ip-10-0-69-86.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-86.us-east-2.compute.internal?timeout=10s - read tcp 10.0.69.86:32776->10.0.68.198:6443: read: connection reset by peer
2026-04-19T12:50:19Z node/ip-10-0-77-76.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-77-76.us-east-2.compute.internal?timeout=10s - read tcp 10.0.77.76:33760->10.0.68.198:6443: read: connection reset by peer
2026-04-19T12:54:19Z node/ip-10-0-51-225.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-225.us-east-2.compute.internal?timeout=10s - read tcp 10.0.51.225:58650->10.0.68.198:6443: read: connection reset by peer
2026-04-19T12:58:55Z node/ip-10-0-51-225.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-225.us-east-2.compute.internal?timeout=10s - read tcp 10.0.51.225:47032->10.0.68.198:6443: read: connection reset by peer
2026-04-19T13:03:08Z node/ip-10-0-79-122.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-122.us-east-2.compute.internal?timeout=10s - read tcp 10.0.79.122:37758->10.0.29.157:6443: read: connection reset by peer
2026-04-19T13:03:35Z node/ip-10-0-27-19.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-19.us-east-2.compute.internal?timeout=10s - read tcp 10.0.27.19:59166->10.0.29.157:6443: read: connection reset by peer
2026-04-19T13:06:51Z node/ip-10-0-79-122.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-122.us-east-2.compute.internal?timeout=10s - read tcp 10.0.79.122:37688->10.0.29.157:6443: read: connection reset by peer
2026-04-19T13:07:01Z node/ip-10-0-79-122.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-122.us-east-2.compute.internal?timeout=10s - read tcp 10.0.79.122:55792->10.0.68.198:6443: read: connection reset by peer
2026-04-19T13:07:10Z node/ip-10-0-77-76.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-77-76.us-east-2.compute.internal?timeout=10s - read tcp 10.0.77.76:50696->10.0.68.198:6443: read: connection reset by peer
2026-04-19T13:10:45Z node/ip-10-0-69-86.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-69-86.us-east-2.compute.internal?timeout=10s - read tcp 10.0.69.86:53146->10.0.29.157:6443: read: connection reset by peer
2026-04-19T13:11:05Z node/ip-10-0-77-76.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-77-76.us-east-2.compute.internal?timeout=10s - read tcp 10.0.77.76:60456->10.0.68.198:6443: read: connection reset by peer
2026-04-19T13:24:51Z node/ip-10-0-27-19.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i5d36h8s-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-19.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045686204527022080junit40 hours ago
2026-04-19T02:57:09Z node/ip-10-0-6-102.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y0q8mw4z-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-102.us-west-2.compute.internal?timeout=10s - read tcp 10.0.6.102:41194->10.0.35.99:6443: read: connection reset by peer
2026-04-19T04:07:48Z node/ip-10-0-114-75.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y0q8mw4z-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-75.us-west-2.compute.internal?timeout=10s - read tcp 10.0.114.75:37548->10.0.65.40:6443: read: connection reset by peer
2026-04-19T04:07:53Z node/ip-10-0-54-192.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y0q8mw4z-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-54-192.us-west-2.compute.internal?timeout=10s - read tcp 10.0.54.192:39502->10.0.35.99:6443: read: connection reset by peer
2026-04-19T04:07:56Z node/ip-10-0-118-4.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y0q8mw4z-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-118-4.us-west-2.compute.internal?timeout=10s - read tcp 10.0.118.4:33666->10.0.65.40:6443: read: connection reset by peer
2026-04-19T04:11:44Z node/ip-10-0-6-102.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y0q8mw4z-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-102.us-west-2.compute.internal?timeout=10s - read tcp 10.0.6.102:57566->10.0.65.40:6443: read: connection reset by peer
2026-04-19T04:31:14Z node/ip-10-0-114-75.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-y0q8mw4z-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-75.us-west-2.compute.internal?timeout=10s - context deadline exceeded
#2045605698900856832junit45 hours ago
2026-04-18T22:47:24Z node/ip-10-0-76-43.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zxp51p4f-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-43.us-east-2.compute.internal?timeout=10s - read tcp 10.0.76.43:51524->10.0.70.230:6443: read: connection reset by peer
2026-04-18T22:55:15Z node/ip-10-0-76-43.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zxp51p4f-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-43.us-east-2.compute.internal?timeout=10s - read tcp 10.0.76.43:51956->10.0.70.230:6443: read: connection reset by peer
2026-04-18T23:00:09Z node/ip-10-0-19-209.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zxp51p4f-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-209.us-east-2.compute.internal?timeout=10s - read tcp 10.0.19.209:36640->10.0.70.230:6443: read: connection reset by peer
2026-04-18T23:00:09Z node/ip-10-0-110-221.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zxp51p4f-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-110-221.us-east-2.compute.internal?timeout=10s - read tcp 10.0.110.221:52998->10.0.60.85:6443: read: connection reset by peer
2026-04-18T23:07:45Z node/ip-10-0-117-166.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zxp51p4f-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-166.us-east-2.compute.internal?timeout=10s - read tcp 10.0.117.166:56632->10.0.70.230:6443: read: connection reset by peer
2026-04-18T23:07:51Z node/ip-10-0-76-43.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zxp51p4f-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-43.us-east-2.compute.internal?timeout=10s - read tcp 10.0.76.43:34884->10.0.60.85:6443: read: connection reset by peer
2026-04-18T23:21:05Z node/ip-10-0-9-40.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zxp51p4f-5acd1.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-40.us-east-2.compute.internal?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-nightly-5.0-e2e-aws-ovn-single-node (all) - 8 runs, 0% failed, 13% of runs match
#2046041305691197440junit16 hours ago
    <*fmt.wrapError | 0xc00a122020>:
    error reading from error stream: read message: read tcp 172.24.136.10:56076->52.88.91.46:6443: read: connection reset by peer
    {
        msg: "error reading from error stream: read message: read tcp 172.24.136.10:56076->52.88.91.46:6443: read: connection reset by peer",
        err: <*fmt.wrapError | 0xc009f57c00>{
            msg: "read message: read tcp 172.24.136.10:56076->52.88.91.46:6443: read: connection reset by peer",
            err: <*net.OpError | 0xc0019716d0>{
periodic-ci-openshift-release-main-nightly-4.20-e2e-metal-ipi-ovn-upgrade (all) - 6 runs, 0% failed, 83% of runs match
#2046033497205772288junit16 hours ago
2026-04-20T03:19:50Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:42966->192.168.111.5:6443: read: connection reset by peer
2026-04-20T03:26:04Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:40176->192.168.111.5:6443: read: connection reset by peer
2026-04-20T03:26:05Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:44240->192.168.111.5:6443: read: connection reset by peer
2026-04-20T03:26:09Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:48950->192.168.111.5:6443: read: connection reset by peer
#2045949731275804672junit21 hours ago
2026-04-19T22:21:00Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - write tcp 192.168.111.21:51890->192.168.111.5:6443: write: connection reset by peer
2026-04-19T22:21:03Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:45300->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:21:07Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:40322->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:21:08Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:42990->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:21:09Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:50280->192.168.111.5:6443: read: connection reset by peer
#2045949731275804672junit21 hours ago
Apr 19 22:21:33.996 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (admissionregistration.k8s.io/v1, Kind=ValidatingAdmissionPolicy) /user-defined-networks-namespace-label: failed to apply / update (admissionregistration.k8s.io/v1, Kind=ValidatingAdmissionPolicy) /user-defined-networks-namespace-label: Patch "https://api-int.ostest.test.metalkube.org:6443/apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies/user-defined-networks-namespace-label?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 192.168.111.20:50056->192.168.111.5:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 22:21:33.996 - 432ms E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (admissionregistration.k8s.io/v1, Kind=ValidatingAdmissionPolicy) /user-defined-networks-namespace-label: failed to apply / update (admissionregistration.k8s.io/v1, Kind=ValidatingAdmissionPolicy) /user-defined-networks-namespace-label: Patch "https://api-int.ostest.test.metalkube.org:6443/apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies/user-defined-networks-namespace-label?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 192.168.111.20:50056->192.168.111.5:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 22:21:34.428 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045811530934521856junit31 hours ago
2026-04-19T12:45:51Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:53004->192.168.111.5:6443: read: connection reset by peer
2026-04-19T12:45:55Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - write tcp 192.168.111.21:35726->192.168.111.5:6443: write: connection reset by peer
2026-04-19T12:45:56Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:41974->192.168.111.5:6443: read: connection reset by peer
2026-04-19T12:45:56Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:48380->192.168.111.5:6443: read: connection reset by peer
#2045679401844084736junit40 hours ago
2026-04-19T04:11:37Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:60612->192.168.111.5:6443: read: connection reset by peer
2026-04-19T04:11:38Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:55168->192.168.111.5:6443: read: connection reset by peer
2026-04-19T04:11:39Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:58160->192.168.111.5:6443: read: connection reset by peer
2026-04-19T04:11:39Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:53510->192.168.111.5:6443: read: connection reset by peer
2026-04-19T04:23:48Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - write tcp 192.168.111.21:54492->192.168.111.5:6443: write: connection reset by peer
2026-04-19T04:23:48Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:45468->192.168.111.5:6443: read: connection reset by peer
2026-04-19T04:23:49Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:46828->192.168.111.5:6443: read: connection reset by peer
2026-04-19T04:23:55Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:45168->192.168.111.5:6443: read: connection reset by peer
#2045600296956071936junit45 hours ago
2026-04-18T22:38:16Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:38004->192.168.111.5:6443: read: connection reset by peer
2026-04-18T22:38:20Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:43554->192.168.111.5:6443: read: connection reset by peer
2026-04-18T22:38:24Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:36264->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.20-e2e-aws-runc-upgrade (all) - 2 runs, 0% failed, 100% of runs match
#2046031863641804800junit16 hours ago
2026-04-20T02:43:28Z node/ip-10-0-40-66.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i4j1yggm-f3926.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-66.us-west-2.compute.internal?timeout=10s - context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2026-04-20T02:54:10Z node/ip-10-0-58-217.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i4j1yggm-f3926.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-217.us-west-2.compute.internal?timeout=10s - read tcp 10.0.58.217:55448->10.0.124.253:6443: read: connection reset by peer
2026-04-20T03:01:37Z node/ip-10-0-85-179.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i4j1yggm-f3926.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-85-179.us-west-2.compute.internal?timeout=10s - read tcp 10.0.85.179:59056->10.0.59.33:6443: read: connection reset by peer
#2045669471854530560junit40 hours ago
2026-04-19T02:03:28Z node/ip-10-0-120-255.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-v16kv2rb-f3926.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-255.ec2.internal?timeout=10s - read tcp 10.0.120.255:54820->10.0.58.111:6443: read: connection reset by peer
2026-04-19T02:24:23Z node/ip-10-0-120-255.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-v16kv2rb-f3926.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-255.ec2.internal?timeout=10s - read tcp 10.0.120.255:40996->10.0.105.244:6443: read: connection reset by peer
2026-04-19T02:24:40Z node/ip-10-0-26-178.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-v16kv2rb-f3926.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-178.ec2.internal?timeout=10s - read tcp 10.0.26.178:52572->10.0.105.244:6443: read: connection reset by peer
2026-04-19T02:28:18Z node/ip-10-0-120-255.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-v16kv2rb-f3926.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-255.ec2.internal?timeout=10s - read tcp 10.0.120.255:33276->10.0.105.244:6443: read: connection reset by peer
2026-04-19T02:32:03Z node/ip-10-0-71-35.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-v16kv2rb-f3926.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-35.ec2.internal?timeout=10s - read tcp 10.0.71.35:41822->10.0.58.111:6443: read: connection reset by peer
2026-04-19T02:32:05Z node/ip-10-0-87-66.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-v16kv2rb-f3926.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-87-66.ec2.internal?timeout=10s - read tcp 10.0.87.66:59666->10.0.105.244:6443: read: connection reset by peer
2026-04-19T02:32:10Z node/ip-10-0-82-194.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-v16kv2rb-f3926.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-82-194.ec2.internal?timeout=10s - read tcp 10.0.82.194:58830->10.0.105.244:6443: read: connection reset by peer
2026-04-19T02:32:13Z node/ip-10-0-120-255.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-v16kv2rb-f3926.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-255.ec2.internal?timeout=10s - read tcp 10.0.120.255:39936->10.0.105.244:6443: read: connection reset by peer
2026-04-19T02:32:19Z node/ip-10-0-59-67.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-v16kv2rb-f3926.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-67.ec2.internal?timeout=10s - read tcp 10.0.59.67:47486->10.0.58.111:6443: read: connection reset by peer
2026-04-19T02:47:19Z node/ip-10-0-26-178.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-v16kv2rb-f3926.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-178.ec2.internal?timeout=10s - context deadline exceeded (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-arbiter-upgrade-workers (all) - 2 runs, 0% failed, 100% of runs match
#2046045453237096448junit16 hours ago
2026-04-20T04:42:52Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-20T04:45:39Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:51462->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:45:43Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:55920->192.168.111.5:6443: read: connection reset by peer
#2045683061756006400junit41 hours ago
2026-04-19T04:03:23Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T04:31:06Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:36468->192.168.111.5:6443: read: connection reset by peer
2026-04-19T04:31:06Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - write tcp 192.168.111.21:53780->192.168.111.5:6443: write: connection reset by peer
2026-04-19T04:31:10Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:48262->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-csi-operator-release-4.20-periodics-periodic-e2e-aws-efs-csi (all) - 2 runs, 0% failed, 50% of runs match
#2046062565577986048junit17 hours ago
2026-04-20T04:10:03Z node/ip-10-0-58-46.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-jvs0qffk-2bb9a.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-58-46.us-west-2.compute.internal?timeout=10s - read tcp 10.0.58.46:48332->10.0.96.44:6443: read: connection reset by peer
periodic-ci-openshift-cluster-storage-operator-release-4.22-periodics-periodic-e2e-aws-ovn-upgrade-check-dev-symlinks (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#2046050737967861760junit17 hours ago
2026-04-20T03:28:36Z node/ip-10-0-95-218.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivm1dwkb-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-95-218.ec2.internal?timeout=10s - read tcp 10.0.95.218:60736->10.0.98.148:6443: read: connection reset by peer
2026-04-20T03:28:38Z node/ip-10-0-73-111.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivm1dwkb-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-111.ec2.internal?timeout=10s - read tcp 10.0.73.111:56400->10.0.52.110:6443: read: connection reset by peer
2026-04-20T03:28:48Z node/ip-10-0-73-111.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivm1dwkb-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-111.ec2.internal?timeout=10s - read tcp 10.0.73.111:56750->10.0.52.110:6443: read: connection reset by peer
2026-04-20T03:32:38Z node/ip-10-0-15-220.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivm1dwkb-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-220.ec2.internal?timeout=10s - read tcp 10.0.15.220:51658->10.0.98.148:6443: read: connection reset by peer
2026-04-20T04:13:15Z node/ip-10-0-73-111.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivm1dwkb-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-111.ec2.internal?timeout=10s - read tcp 10.0.73.111:57964->10.0.98.148:6443: read: connection reset by peer
2026-04-20T04:21:22Z node/ip-10-0-66-7.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivm1dwkb-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-66-7.ec2.internal?timeout=10s - read tcp 10.0.66.7:35136->10.0.52.110:6443: read: connection reset by peer
2026-04-20T04:24:59Z node/ip-10-0-95-218.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivm1dwkb-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-95-218.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045688345777934336junit40 hours ago
2026-04-19T03:17:01Z node/ip-10-0-104-24.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3tsi2bil-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-24.us-east-2.compute.internal?timeout=10s - read tcp 10.0.104.24:45640->10.0.78.119:6443: read: connection reset by peer
2026-04-19T03:17:02Z node/ip-10-0-49-192.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3tsi2bil-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-192.us-east-2.compute.internal?timeout=10s - read tcp 10.0.49.192:35070->10.0.0.231:6443: read: connection reset by peer
2026-04-19T03:17:09Z node/ip-10-0-99-88.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3tsi2bil-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-99-88.us-east-2.compute.internal?timeout=10s - read tcp 10.0.99.88:42436->10.0.0.231:6443: read: connection reset by peer
2026-04-19T03:17:09Z node/ip-10-0-107-34.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3tsi2bil-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-34.us-east-2.compute.internal?timeout=10s - read tcp 10.0.107.34:42880->10.0.78.119:6443: read: connection reset by peer
2026-04-19T03:37:13Z node/ip-10-0-23-109.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3tsi2bil-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-109.us-east-2.compute.internal?timeout=10s - read tcp 10.0.23.109:37976->10.0.78.119:6443: read: connection reset by peer
2026-04-19T03:37:15Z node/ip-10-0-104-24.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3tsi2bil-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-24.us-east-2.compute.internal?timeout=10s - read tcp 10.0.104.24:48342->10.0.0.231:6443: read: connection reset by peer
2026-04-19T03:41:27Z node/ip-10-0-107-34.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3tsi2bil-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-34.us-east-2.compute.internal?timeout=10s - read tcp 10.0.107.34:44286->10.0.78.119:6443: read: connection reset by peer
2026-04-19T03:45:35Z node/ip-10-0-23-109.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3tsi2bil-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-23-109.us-east-2.compute.internal?timeout=10s - read tcp 10.0.23.109:48882->10.0.0.231:6443: read: connection reset by peer
2026-04-19T03:45:36Z node/ip-10-0-104-24.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3tsi2bil-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-24.us-east-2.compute.internal?timeout=10s - read tcp 10.0.104.24:57928->10.0.0.231:6443: read: connection reset by peer
2026-04-19T04:35:28Z node/ip-10-0-107-34.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3tsi2bil-6b2a5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-107-34.us-east-2.compute.internal?timeout=10s - read tcp 10.0.107.34:59914->10.0.0.231:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.21-e2e-metal-ovn-two-node-arbiter-upgrade-day-2-workers (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#2046050234999508992junit17 hours ago
2026-04-20T04:12:27Z node/extraworker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-1?timeout=10s - read tcp 192.168.111.24:54446->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:12:30Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:37518->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:12:31Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:49160->192.168.111.5:6443: read: connection reset by peer
2026-04-20T04:12:32Z node/extraworker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-0?timeout=10s - read tcp 192.168.111.23:40584->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.18-e2e-aws-crun-upgrade (all) - 2 runs, 0% failed, 100% of runs match
#2046030101723746304junit17 hours ago
2026-04-20T01:47:44Z node/ip-10-0-52-122.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5p7lg1d8-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-122.ec2.internal?timeout=10s - read tcp 10.0.52.122:35670->10.0.83.205:6443: read: connection reset by peer
2026-04-20T01:47:45Z node/ip-10-0-13-75.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5p7lg1d8-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-75.ec2.internal?timeout=10s - read tcp 10.0.13.75:33104->10.0.83.205:6443: read: connection reset by peer
2026-04-20T01:47:47Z node/ip-10-0-112-45.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5p7lg1d8-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-45.ec2.internal?timeout=10s - read tcp 10.0.112.45:49248->10.0.83.205:6443: read: connection reset by peer
2026-04-20T01:47:55Z node/ip-10-0-13-75.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5p7lg1d8-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-75.ec2.internal?timeout=10s - read tcp 10.0.13.75:43082->10.0.54.15:6443: read: connection reset by peer
2026-04-20T02:03:30Z node/ip-10-0-105-211.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5p7lg1d8-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-105-211.ec2.internal?timeout=10s - read tcp 10.0.105.211:49556->10.0.83.205:6443: read: connection reset by peer
2026-04-20T02:03:53Z node/ip-10-0-52-122.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5p7lg1d8-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-122.ec2.internal?timeout=10s - read tcp 10.0.52.122:35688->10.0.54.15:6443: read: connection reset by peer
2026-04-20T02:07:50Z node/ip-10-0-13-75.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5p7lg1d8-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-75.ec2.internal?timeout=10s - read tcp 10.0.13.75:57130->10.0.54.15:6443: read: connection reset by peer
2026-04-20T02:47:18Z node/ip-10-0-106-148.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5p7lg1d8-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-148.ec2.internal?timeout=10s - read tcp 10.0.106.148:35078->10.0.83.205:6443: read: connection reset by peer
2026-04-20T02:54:04Z node/ip-10-0-52-122.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5p7lg1d8-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-122.ec2.internal?timeout=10s - read tcp 10.0.52.122:58668->10.0.54.15:6443: read: connection reset by peer
2026-04-20T02:54:19Z node/ip-10-0-112-45.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5p7lg1d8-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-112-45.ec2.internal?timeout=10s - read tcp 10.0.112.45:60672->10.0.54.15:6443: read: connection reset by peer
#2045667710695641088junit41 hours ago
2026-04-19T01:47:47Z node/ip-10-0-36-238.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1bt2sigf-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-238.us-east-2.compute.internal?timeout=10s - read tcp 10.0.36.238:38796->10.0.24.112:6443: read: connection reset by peer
2026-04-19T02:01:25Z node/ip-10-0-117-185.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1bt2sigf-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-185.us-east-2.compute.internal?timeout=10s - read tcp 10.0.117.185:45568->10.0.77.203:6443: read: connection reset by peer
2026-04-19T02:50:32Z node/ip-10-0-27-61.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1bt2sigf-89776.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-61.us-east-2.compute.internal?timeout=10s - read tcp 10.0.27.61:46248->10.0.24.112:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-ovn-serial-2of2 (all) - 8 runs, 38% failed, 167% of failures match = 63% impact
#2046033540847505408junit17 hours ago
2026-04-20T01:56:04Z node/ip-10-0-30-72.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-72.ec2.internal?timeout=10s - read tcp 10.0.30.72:50396->10.0.66.253:6443: read: connection reset by peer
2026-04-20T01:56:10Z node/ip-10-0-111-210.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-210.ec2.internal?timeout=10s - read tcp 10.0.111.210:57022->10.0.66.253:6443: read: connection reset by peer
2026-04-20T01:56:23Z node/ip-10-0-124-24.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-124-24.ec2.internal?timeout=10s - read tcp 10.0.124.24:50912->10.0.66.253:6443: read: connection reset by peer
2026-04-20T01:56:32Z node/ip-10-0-16-54.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-54.ec2.internal?timeout=10s - read tcp 10.0.16.54:53754->10.0.66.253:6443: read: connection reset by peer
2026-04-20T02:57:18Z node/ip-10-0-30-72.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-72.ec2.internal?timeout=10s - read tcp 10.0.30.72:55338->10.0.66.253:6443: read: connection reset by peer
2026-04-20T03:05:23Z node/ip-10-0-111-210.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-210.ec2.internal?timeout=10s - read tcp 10.0.111.210:51264->10.0.66.253:6443: read: connection reset by peer
2026-04-20T03:09:53Z node/ip-10-0-30-72.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-30-72.ec2.internal?timeout=10s - read tcp 10.0.30.72:56166->10.0.48.81:6443: read: connection reset by peer
2026-04-20T03:10:01Z node/ip-10-0-115-190.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-115-190.ec2.internal?timeout=10s - read tcp 10.0.115.190:56976->10.0.66.253:6443: read: connection reset by peer
2026-04-20T03:13:46Z node/ip-10-0-16-54.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-54.ec2.internal?timeout=10s - read tcp 10.0.16.54:34612->10.0.48.81:6443: read: connection reset by peer
2026-04-20T03:14:17Z node/ip-10-0-115-190.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-115-190.ec2.internal?timeout=10s - read tcp 10.0.115.190:47028->10.0.66.253:6443: read: connection reset by peer
2026-04-20T03:14:33Z node/ip-10-0-111-210.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-210.ec2.internal?timeout=10s - read tcp 10.0.111.210:37670->10.0.66.253:6443: read: connection reset by peer
2026-04-20T03:17:52Z node/ip-10-0-124-24.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-124-24.ec2.internal?timeout=10s - read tcp 10.0.124.24:37486->10.0.48.81:6443: read: connection reset by peer
2026-04-20T03:17:57Z node/ip-10-0-111-210.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-210.ec2.internal?timeout=10s - read tcp 10.0.111.210:58494->10.0.66.253:6443: read: connection reset by peer
2026-04-20T03:35:20Z node/ip-10-0-82-227.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rbct8p8-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-82-227.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045949763970404352junit23 hours ago
2026-04-19T20:29:13Z node/ip-10-0-76-31.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-31.us-west-2.compute.internal?timeout=10s - read tcp 10.0.76.31:48354->10.0.97.36:6443: read: connection reset by peer
2026-04-19T20:29:27Z node/ip-10-0-37-43.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-43.us-west-2.compute.internal?timeout=10s - read tcp 10.0.37.43:57448->10.0.49.253:6443: read: connection reset by peer
2026-04-19T21:26:30Z node/ip-10-0-60-174.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-174.us-west-2.compute.internal?timeout=10s - read tcp 10.0.60.174:39270->10.0.97.36:6443: read: connection reset by peer
2026-04-19T21:30:24Z node/ip-10-0-123-204.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-204.us-west-2.compute.internal?timeout=10s - read tcp 10.0.123.204:51436->10.0.49.253:6443: read: connection reset by peer
2026-04-19T21:34:20Z node/ip-10-0-123-204.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-204.us-west-2.compute.internal?timeout=10s - read tcp 10.0.123.204:40052->10.0.49.253:6443: read: connection reset by peer
2026-04-19T21:38:56Z node/ip-10-0-123-204.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-204.us-west-2.compute.internal?timeout=10s - read tcp 10.0.123.204:42694->10.0.49.253:6443: read: connection reset by peer
2026-04-19T21:39:01Z node/ip-10-0-37-27.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-27.us-west-2.compute.internal?timeout=10s - read tcp 10.0.37.27:44960->10.0.97.36:6443: read: connection reset by peer
2026-04-19T21:39:07Z node/ip-10-0-60-174.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-174.us-west-2.compute.internal?timeout=10s - read tcp 10.0.60.174:58998->10.0.49.253:6443: read: connection reset by peer
2026-04-19T21:39:12Z node/ip-10-0-37-27.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-27.us-west-2.compute.internal?timeout=10s - read tcp 10.0.37.27:33136->10.0.49.253:6443: read: connection reset by peer
2026-04-19T21:42:45Z node/ip-10-0-37-27.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-27.us-west-2.compute.internal?timeout=10s - read tcp 10.0.37.27:49818->10.0.49.253:6443: read: connection reset by peer
2026-04-19T21:42:54Z node/ip-10-0-37-43.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-43.us-west-2.compute.internal?timeout=10s - read tcp 10.0.37.43:59506->10.0.49.253:6443: read: connection reset by peer
2026-04-19T21:43:00Z node/ip-10-0-27-125.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-125.us-west-2.compute.internal?timeout=10s - read tcp 10.0.27.125:35586->10.0.49.253:6443: read: connection reset by peer
2026-04-19T22:00:10Z node/ip-10-0-76-31.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kgxv1inh-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-31.us-west-2.compute.internal?timeout=10s - context deadline exceeded
#2045811649692045312junit32 hours ago
2026-04-19T11:14:25Z node/ip-10-0-5-199.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fvqdk321-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-199.ec2.internal?timeout=10s - read tcp 10.0.5.199:55240->10.0.15.19:6443: read: connection reset by peer
2026-04-19T11:14:42Z node/ip-10-0-18-37.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fvqdk321-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-18-37.ec2.internal?timeout=10s - read tcp 10.0.18.37:57892->10.0.15.19:6443: read: connection reset by peer
2026-04-19T12:16:12Z node/ip-10-0-80-63.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fvqdk321-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-63.ec2.internal?timeout=10s - read tcp 10.0.80.63:48044->10.0.15.19:6443: read: connection reset by peer
2026-04-19T12:20:35Z node/ip-10-0-5-199.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fvqdk321-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-199.ec2.internal?timeout=10s - read tcp 10.0.5.199:41782->10.0.15.19:6443: read: connection reset by peer
2026-04-19T12:20:37Z node/ip-10-0-29-2.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fvqdk321-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-2.ec2.internal?timeout=10s - read tcp 10.0.29.2:53556->10.0.15.19:6443: read: connection reset by peer
2026-04-19T12:20:38Z node/ip-10-0-80-63.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fvqdk321-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-63.ec2.internal?timeout=10s - read tcp 10.0.80.63:56358->10.0.15.19:6443: read: connection reset by peer
2026-04-19T12:24:09Z node/ip-10-0-101-160.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fvqdk321-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-160.ec2.internal?timeout=10s - read tcp 10.0.101.160:56272->10.0.108.132:6443: read: connection reset by peer
2026-04-19T12:24:21Z node/ip-10-0-39-38.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fvqdk321-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-38.ec2.internal?timeout=10s - read tcp 10.0.39.38:51020->10.0.15.19:6443: read: connection reset by peer
2026-04-19T12:24:29Z node/ip-10-0-5-199.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fvqdk321-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-199.ec2.internal?timeout=10s - read tcp 10.0.5.199:37544->10.0.15.19:6443: read: connection reset by peer
2026-04-19T12:28:17Z node/ip-10-0-29-2.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fvqdk321-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-29-2.ec2.internal?timeout=10s - read tcp 10.0.29.2:39382->10.0.15.19:6443: read: connection reset by peer
2026-04-19T12:28:25Z node/ip-10-0-101-160.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fvqdk321-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-101-160.ec2.internal?timeout=10s - read tcp 10.0.101.160:55234->10.0.15.19:6443: read: connection reset by peer
2026-04-19T12:42:13Z node/ip-10-0-80-63.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-fvqdk321-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-63.ec2.internal?timeout=10s - context deadline exceeded
#2045679423327309824junit41 hours ago
2026-04-19T03:18:47Z node/ip-10-0-17-202.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7j611x1z-fb179.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-17-202.ec2.internal?timeout=10s - read tcp 10.0.17.202:47380->10.0.110.141:6443: read: connection reset by peer
2026-04-19T03:31:41Z node/ip-10-0-81-187.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7j611x1z-fb179.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-81-187.ec2.internal?timeout=10s - read tcp 10.0.81.187:51554->10.0.46.9:6443: read: connection reset by peer
2026-04-19T03:35:33Z node/ip-10-0-59-202.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7j611x1z-fb179.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-202.ec2.internal?timeout=10s - read tcp 10.0.59.202:33984->10.0.46.9:6443: read: connection reset by peer
2026-04-19T03:39:21Z node/ip-10-0-63-36.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7j611x1z-fb179.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-63-36.ec2.internal?timeout=10s - read tcp 10.0.63.36:47692->10.0.46.9:6443: read: connection reset by peer
2026-04-19T03:39:43Z node/ip-10-0-44-31.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7j611x1z-fb179.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-44-31.ec2.internal?timeout=10s - read tcp 10.0.44.31:36734->10.0.110.141:6443: read: connection reset by peer
2026-04-19T04:00:54Z node/ip-10-0-81-187.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-7j611x1z-fb179.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-81-187.ec2.internal?timeout=10s - context deadline exceeded
#2045600306158374912junit46 hours ago
2026-04-18T21:18:16Z node/ip-10-0-71-174.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zyikr1ds-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-174.us-west-2.compute.internal?timeout=10s - read tcp 10.0.71.174:49170->10.0.64.148:6443: read: connection reset by peer
2026-04-18T22:17:56Z node/ip-10-0-125-102.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zyikr1ds-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-102.us-west-2.compute.internal?timeout=10s - read tcp 10.0.125.102:46104->10.0.64.148:6443: read: connection reset by peer
2026-04-18T22:21:20Z node/ip-10-0-125-102.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zyikr1ds-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-102.us-west-2.compute.internal?timeout=10s - read tcp 10.0.125.102:58308->10.0.57.61:6443: read: connection reset by peer
2026-04-18T22:25:49Z node/ip-10-0-71-174.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zyikr1ds-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-174.us-west-2.compute.internal?timeout=10s - read tcp 10.0.71.174:50432->10.0.64.148:6443: read: connection reset by peer
2026-04-18T22:34:04Z node/ip-10-0-125-102.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zyikr1ds-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-102.us-west-2.compute.internal?timeout=10s - read tcp 10.0.125.102:41920->10.0.64.148:6443: read: connection reset by peer
2026-04-18T22:34:20Z node/ip-10-0-71-174.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zyikr1ds-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-174.us-west-2.compute.internal?timeout=10s - read tcp 10.0.71.174:52150->10.0.57.61:6443: read: connection reset by peer
2026-04-18T22:38:06Z node/ip-10-0-71-174.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zyikr1ds-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-174.us-west-2.compute.internal?timeout=10s - read tcp 10.0.71.174:43186->10.0.64.148:6443: read: connection reset by peer
2026-04-18T22:38:32Z node/ip-10-0-122-219.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zyikr1ds-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-122-219.us-west-2.compute.internal?timeout=10s - read tcp 10.0.122.219:60848->10.0.64.148:6443: read: connection reset by peer
2026-04-18T22:51:46Z node/ip-10-0-73-51.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-zyikr1ds-fb179.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-51.us-west-2.compute.internal?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-nightly-4.20-e2e-metal-ovn-two-node-arbiter-upgrade (all) - 2 runs, 0% failed, 50% of runs match
#2046042439872942080junit18 hours ago
2026-04-20T03:35:06Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - write tcp 192.168.111.20:32880->192.168.111.5:6443: write: connection reset by peer
2026-04-20T03:35:11Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:47476->192.168.111.5:6443: read: connection reset by peer
2026-04-20T03:41:33Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-ovn-runc (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2046036393406238720junit18 hours ago
2026-04-20T02:09:27Z node/ip-10-0-125-142.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ngc4fqyi-152a2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-125-142.us-east-2.compute.internal?timeout=10s - read tcp 10.0.125.142:41128->10.0.127.208:6443: read: connection reset by peer
2026-04-20T02:09:31Z node/ip-10-0-28-31.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ngc4fqyi-152a2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-28-31.us-east-2.compute.internal?timeout=10s - read tcp 10.0.28.31:49028->10.0.127.208:6443: read: connection reset by peer
2026-04-20T02:09:40Z node/ip-10-0-4-39.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ngc4fqyi-152a2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-39.us-east-2.compute.internal?timeout=10s - read tcp 10.0.4.39:46242->10.0.53.200:6443: read: connection reset by peer
periodic-ci-openshift-hypershift-release-4.20-periodics-e2e-aws-ovn-conformance-serial (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2046031196428701696junit18 hours ago
Apr 20 03:06:50.113 - 999ms E backend-disruption-name/openshift-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/368c4b82-500b-45d6-a6e5-8fbb518563f1 backend-disruption-name/openshift-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://ab446d3f7f6cb4360b48834987bf027e-e9bb3b2e3dcb38c4.elb.us-east-1.amazonaws.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams": dial tcp 3.214.7.180:6443: connect: connection refused
Apr 20 03:06:52.112 - 1s    E backend-disruption-name/openshift-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/277ff6a6-0777-4a0a-8f9b-b3b6fa576487 backend-disruption-name/openshift-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: read tcp 10.131.54.8:41532->34.196.82.240:6443: read: connection reset by peer
Apr 20 03:06:55.112 - 1s    E backend-disruption-name/openshift-api-new-connections connection/new disruption/openshift-tests reason/DisruptionBegan request-audit-id/c77e6e37-2f49-44d7-855f-351ba36ffe4f backend-disruption-name/openshift-api-new-connections connection/new disruption/openshift-tests stopped responding to GET requests over new connections: Get "https://ab446d3f7f6cb4360b48834987bf027e-e9bb3b2e3dcb38c4.elb.us-east-1.amazonaws.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams": dial tcp 34.196.82.240:6443: connect: connection refused
periodic-ci-openshift-release-main-nightly-4.21-e2e-metal-ovn-two-node-arbiter-upgrade-workers (all) - 2 runs, 0% failed, 50% of runs match
#2046027082063941632junit18 hours ago
2026-04-20T03:04:30Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:33806->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-arbiter-techpreview (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2046002688247730176junit18 hours ago
2026-04-20T01:05:41Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:36780->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ovn-two-node-arbiter-upgrade (all) - 2 runs, 0% failed, 50% of runs match
#2046016515555201024junit18 hours ago
2026-04-20T02:53:05Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:45824->192.168.111.5:6443: read: connection reset by peer
2026-04-20T02:53:06Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:58954->192.168.111.5:6443: read: connection reset by peer
2026-04-20T02:59:28Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:45646->192.168.111.5:6443: read: connection reset by peer
2026-04-20T02:59:35Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:34894->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.17-upgrade-from-stable-4.16-e2e-metal-ipi-ovn-upgrade-network-flow-matrix (all) - 2 runs, 0% failed, 50% of runs match
#2046001013868990464junit18 hours ago
2026-04-20T00:32:45Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - dial tcp 192.168.111.5:6443: connect: connection refused
2026-04-20T00:36:49Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:37796->192.168.111.5:6443: read: connection reset by peer
2026-04-20T00:36:58Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046001013868990464junit18 hours ago
2026-04-20T00:57:28Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - write tcp 192.168.111.22:53212->192.168.111.5:6443: write: connection reset by peer
2026-04-20T00:57:35Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:59346->192.168.111.5:6443: read: connection reset by peer
2026-04-20T00:57:38Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2046001013868990464junit18 hours ago
2026-04-20T01:46:47Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-20T01:46:58Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:36138->192.168.111.5:6443: read: connection reset by peer
2026-04-20T01:46:58Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:53482->192.168.111.5:6443: read: connection reset by peer
2026-04-20T01:46:58Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - http2: client connection force closed via ClientConn.Close
2026-04-20T01:46:59Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - write tcp 192.168.111.21:47216->192.168.111.5:6443: write: connection reset by peer
2026-04-20T01:47:00Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:48154->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.20-e2e-vsphere-ovn-upgrade (all) - 7 runs, 0% failed, 57% of runs match
#2046033447922700288junit18 hours ago
2026-04-20T02:48:38Z node/ci-op-ksv8z2nk-5e1c5-gwhtk-worker-0-np9dp - reason/FailedToUpdateLease https://api-int.ci-op-ksv8z2nk-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-ksv8z2nk-5e1c5-gwhtk-worker-0-np9dp?timeout=10s - read tcp 10.38.84.91:50056->10.38.84.8:6443: read: connection reset by peer
2026-04-20T02:48:40Z node/ci-op-ksv8z2nk-5e1c5-gwhtk-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-ksv8z2nk-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-ksv8z2nk-5e1c5-gwhtk-master-1?timeout=10s - read tcp 10.38.84.85:57496->10.38.84.8:6443: read: connection reset by peer
2026-04-20T02:48:42Z node/ci-op-ksv8z2nk-5e1c5-gwhtk-worker-0-xw75g - reason/FailedToUpdateLease https://api-int.ci-op-ksv8z2nk-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-ksv8z2nk-5e1c5-gwhtk-worker-0-xw75g?timeout=10s - read tcp 10.38.84.90:50558->10.38.84.8:6443: read: connection reset by peer
#2045949653236584448junit23 hours ago
2026-04-19T21:32:34Z node/ci-op-qsddz7hj-5e1c5-rrcjm-worker-0-rkdpx - reason/FailedToUpdateLease https://api-int.ci-op-qsddz7hj-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qsddz7hj-5e1c5-rrcjm-worker-0-rkdpx?timeout=10s - read tcp 10.93.157.73:37028->10.93.157.16:6443: read: connection reset by peer
2026-04-19T21:32:34Z node/ci-op-qsddz7hj-5e1c5-rrcjm-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-qsddz7hj-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qsddz7hj-5e1c5-rrcjm-master-2?timeout=10s - read tcp 10.93.157.64:58284->10.93.157.16:6443: read: connection reset by peer
2026-04-19T21:38:49Z node/ci-op-qsddz7hj-5e1c5-rrcjm-worker-0-rkdpx - reason/FailedToUpdateLease https://api-int.ci-op-qsddz7hj-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qsddz7hj-5e1c5-rrcjm-worker-0-rkdpx?timeout=10s - read tcp 10.93.157.73:45784->10.93.157.16:6443: read: connection reset by peer
2026-04-19T21:38:54Z node/ci-op-qsddz7hj-5e1c5-rrcjm-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-qsddz7hj-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qsddz7hj-5e1c5-rrcjm-master-0?timeout=10s - read tcp 10.93.157.66:52428->10.93.157.16:6443: read: connection reset by peer
#2045811613113520128junit33 hours ago
2026-04-19T11:52:04Z node/ci-op-qi007l3x-5e1c5-jp68v-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-qi007l3x-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qi007l3x-5e1c5-jp68v-master-2?timeout=10s - read tcp 10.93.251.33:41556->10.93.251.12:6443: read: connection reset by peer
2026-04-19T11:52:07Z node/ci-op-qi007l3x-5e1c5-jp68v-worker-0-tsk2c - reason/FailedToUpdateLease https://api-int.ci-op-qi007l3x-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qi007l3x-5e1c5-jp68v-worker-0-tsk2c?timeout=10s - read tcp 10.93.251.38:50786->10.93.251.12:6443: read: connection reset by peer
#2045679505573416960junit42 hours ago
Apr 19 03:12:32.002 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=Namespace) /openshift-network-console: failed to apply / update (/v1, Kind=Namespace) /openshift-network-console: Patch "https://api-int.ci-op-vsmqit7h-5e1c5.vmc-ci.devcluster.openshift.com:6443/api/v1/namespaces/openshift-network-console?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.38.84.102:48426->10.38.84.6:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 03:12:32.002 - 0ms   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=Namespace) /openshift-network-console: failed to apply / update (/v1, Kind=Namespace) /openshift-network-console: Patch "https://api-int.ci-op-vsmqit7h-5e1c5.vmc-ci.devcluster.openshift.com:6443/api/v1/namespaces/openshift-network-console?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.38.84.102:48426->10.38.84.6:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 03:12:32.003 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045679505573416960junit42 hours ago
2026-04-19T02:59:09Z node/ci-op-vsmqit7h-5e1c5-vx2p7-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-vsmqit7h-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vsmqit7h-5e1c5-vx2p7-master-2?timeout=10s - write tcp 10.38.84.104:56252->10.38.84.6:6443: write: connection reset by peer
2026-04-19T02:59:09Z node/ci-op-vsmqit7h-5e1c5-vx2p7-worker-0-vl4lg - reason/FailedToUpdateLease https://api-int.ci-op-vsmqit7h-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vsmqit7h-5e1c5-vx2p7-worker-0-vl4lg?timeout=10s - read tcp 10.38.84.107:59754->10.38.84.6:6443: read: connection reset by peer
2026-04-19T02:59:12Z node/ci-op-vsmqit7h-5e1c5-vx2p7-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-vsmqit7h-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vsmqit7h-5e1c5-vx2p7-master-1?timeout=10s - read tcp 10.38.84.102:57574->10.38.84.6:6443: read: connection reset by peer
2026-04-19T03:12:03Z node/ci-op-vsmqit7h-5e1c5-vx2p7-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-vsmqit7h-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vsmqit7h-5e1c5-vx2p7-master-0?timeout=10s - read tcp 10.38.84.103:33862->10.38.84.6:6443: read: connection reset by peer
2026-04-19T03:12:10Z node/ci-op-vsmqit7h-5e1c5-vx2p7-worker-0-vl4lg - reason/FailedToUpdateLease https://api-int.ci-op-vsmqit7h-5e1c5.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-vsmqit7h-5e1c5-vx2p7-worker-0-vl4lg?timeout=10s - read tcp 10.38.84.107:50198->10.38.84.6:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ovn-two-node-arbiter-upgrade-workers (all) - 2 runs, 0% failed, 50% of runs match
#2046021043264425984junit18 hours ago
2026-04-20T02:41:20Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:54704->192.168.111.5:6443: read: connection reset by peer
2026-04-20T02:41:24Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:59756->192.168.111.5:6443: read: connection reset by peer
2026-04-20T02:41:27Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:34058->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.16-aws-ovn-network-flow-matrix (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2046000999629328384junit19 hours ago
2026-04-19T23:48:50Z node/ip-10-0-9-191.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48cd8rb9-eeb18.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-191.us-east-2.compute.internal?timeout=10s - read tcp 10.0.9.191:50010->10.0.94.27:6443: read: connection reset by peer
2026-04-19T23:54:00Z node/ip-10-0-106-83.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48cd8rb9-eeb18.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-83.us-east-2.compute.internal?timeout=10s - read tcp 10.0.106.83:33480->10.0.94.27:6443: read: connection reset by peer
2026-04-19T23:56:54Z node/ip-10-0-96-88.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48cd8rb9-eeb18.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-88.us-east-2.compute.internal?timeout=10s - read tcp 10.0.96.88:40766->10.0.25.147:6443: read: connection reset by peer
2026-04-20T01:06:14Z node/ip-10-0-109-226.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48cd8rb9-eeb18.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-226.us-east-2.compute.internal?timeout=10s - read tcp 10.0.109.226:50252->10.0.94.27:6443: read: connection reset by peer
2026-04-20T01:10:08Z node/ip-10-0-86-137.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48cd8rb9-eeb18.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-86-137.us-east-2.compute.internal?timeout=10s - read tcp 10.0.86.137:58918->10.0.94.27:6443: read: connection reset by peer
2026-04-20T01:10:13Z node/ip-10-0-96-88.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48cd8rb9-eeb18.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-88.us-east-2.compute.internal?timeout=10s - read tcp 10.0.96.88:55044->10.0.94.27:6443: read: connection reset by peer
2026-04-20T01:14:38Z node/ip-10-0-106-83.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48cd8rb9-eeb18.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-83.us-east-2.compute.internal?timeout=10s - read tcp 10.0.106.83:38828->10.0.25.147:6443: read: connection reset by peer
2026-04-20T01:22:21Z node/ip-10-0-9-191.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-48cd8rb9-eeb18.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-191.us-east-2.compute.internal?timeout=10s - read tcp 10.0.9.191:58086->10.0.25.147:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.17-e2e-vsphere-ovn-upgrade (all) - 2 runs, 0% failed, 100% of runs match
#2046030353415540736junit19 hours ago
2026-04-20T02:38:21Z node/ci-op-cbxn7m0w-c6c91-d9hm9-worker-0-d94gb - reason/FailedToUpdateLease https://api-int.ci-op-cbxn7m0w-c6c91.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-cbxn7m0w-c6c91-d9hm9-worker-0-d94gb?timeout=10s - read tcp 10.38.84.89:36204->10.38.84.4:6443: read: connection reset by peer
2026-04-20T02:38:25Z node/ci-op-cbxn7m0w-c6c91-d9hm9-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-cbxn7m0w-c6c91.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-cbxn7m0w-c6c91-d9hm9-master-2?timeout=10s - write tcp 10.38.84.81:44700->10.38.84.4:6443: write: broken pipe
#2045667961716346880junit42 hours ago
2026-04-19T02:40:40Z node/ci-op-hns8z4yk-c6c91-qt4zp-worker-0-gb4lg - reason/FailedToUpdateLease https://api-int.ci-op-hns8z4yk-c6c91.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-hns8z4yk-c6c91-qt4zp-worker-0-gb4lg?timeout=10s - dial tcp 10.38.84.10:6443: connect: connection refused
2026-04-19T02:40:54Z node/ci-op-hns8z4yk-c6c91-qt4zp-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-hns8z4yk-c6c91.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-hns8z4yk-c6c91-qt4zp-master-2?timeout=10s - read tcp 10.38.84.86:39460->10.38.84.10:6443: read: connection reset by peer
2026-04-19T02:41:04Z node/ci-op-hns8z4yk-c6c91-qt4zp-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-hns8z4yk-c6c91.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-hns8z4yk-c6c91-qt4zp-master-0?timeout=10s - context deadline exceeded
periodic-ci-openshift-multiarch-main-nightly-4.19-ocp-e2e-upgrade-aws-ovn-arm64 (all) - 1 runs, 0% failed, 100% of runs match
#2046010473102446592junit19 hours ago
2026-04-20T00:35:57Z node/ip-10-0-22-23.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1k828gwb-93d92.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-23.us-east-2.compute.internal?timeout=10s - read tcp 10.0.22.23:40574->10.0.29.175:6443: read: connection reset by peer
2026-04-20T00:50:37Z node/ip-10-0-22-23.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1k828gwb-93d92.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-23.us-east-2.compute.internal?timeout=10s - read tcp 10.0.22.23:41388->10.0.29.175:6443: read: connection reset by peer
2026-04-20T01:23:02Z node/ip-10-0-22-23.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1k828gwb-93d92.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-23.us-east-2.compute.internal?timeout=10s - read tcp 10.0.22.23:41446->10.0.29.175:6443: read: connection reset by peer
2026-04-20T01:23:02Z node/ip-10-0-122-81.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1k828gwb-93d92.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-122-81.us-east-2.compute.internal?timeout=10s - read tcp 10.0.122.81:51498->10.0.29.175:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.18-upgrade-from-stable-4.17-e2e-aws-cgroupsv1-upgrade (all) - 2 runs, 0% failed, 100% of runs match
#2046001160824819712junit19 hours ago
2026-04-20T00:05:40Z node/ip-10-0-82-250.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3zhhp7bi-f86c0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-82-250.us-west-2.compute.internal?timeout=10s - read tcp 10.0.82.250:52168->10.0.89.81:6443: read: connection reset by peer
2026-04-20T00:05:52Z node/ip-10-0-126-119.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3zhhp7bi-f86c0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-119.us-west-2.compute.internal?timeout=10s - read tcp 10.0.126.119:45194->10.0.43.44:6443: read: connection reset by peer
2026-04-20T00:41:32Z node/ip-10-0-84-21.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-3zhhp7bi-f86c0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-21.us-west-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045638769129820160junit43 hours ago
2026-04-19T00:03:21Z node/ip-10-0-110-186.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-9yzbkhbj-f86c0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-110-186.ec2.internal?timeout=10s - read tcp 10.0.110.186:33186->10.0.74.59:6443: read: connection reset by peer
#2045638769129820160junit43 hours ago
Apr 19 01:02:21.049 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited: Patch "https://api-int.ci-op-9yzbkhbj-f86c0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/openshift-ovn-kubernetes-node-limited?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.39.37:33454->10.0.74.59:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 01:02:21.049 - 27s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited: Patch "https://api-int.ci-op-9yzbkhbj-f86c0.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/openshift-ovn-kubernetes-node-limited?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.39.37:33454->10.0.74.59:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38668)
Apr 19 01:02:48.256 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
periodic-ci-openshift-release-main-nightly-4.22-e2e-azure-ovn-multidisk-techpreview (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2045987319822946304junit19 hours ago
    <*fmt.wrapError | 0xc009ffdb60>:
    error reading from error stream: next reader: read tcp 172.24.235.242:42988->168.62.199.180:6443: read: connection reset by peer
    {
        msg: "error reading from error stream: next reader: read tcp 172.24.235.242:42988->168.62.199.180:6443: read: connection reset by peer",
        err: <*fmt.wrapError | 0xc00a11a820>{
            msg: "next reader: read tcp 172.24.235.242:42988->168.62.199.180:6443: read: connection reset by peer",
            err: <*net.OpError | 0xc009c1b5e0>{
periodic-ci-openshift-multiarch-main-nightly-4.21-upgrade-from-stable-4.20-ocp-e2e-upgrade-gcp-ovn-multi-a-a (all) - 6 runs, 67% failed, 25% of failures match = 17% impact
#2045988296118505472junit19 hours ago
    Told to stop trying after 0.047s.
    Unexpected final error while getting *v1.Pod: unexpected error when reading response body. Please retry. Original error: read tcp 10.129.174.220:45720->34.120.73.168:6443: read: connection reset by peer
    {
        msg: "Told to stop trying after 0.047s.\nUnexpected final error while getting *v1.Pod: unexpected error when reading response body. Please retry. Original error: read tcp 10.129.174.220:45720->34.120.73.168:6443: read: connection reset by peer",
        fullStackTrace: "k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodRunningInNamespace({0x9b366d8, 0xf638ee0}, {0x9bc2d08?, 0xc00269c1c0?}, {0x8c73dda, 0xf}, {0xc000e8dec0, 0x2a}, 0x45d964b800)\n\tk8s.io/kubernetes@v1.34.1/test/e2e/framework/pod/wait.go:586 +0x52c\nk8s.io/kubernetes/test/e2e/framework/pod.WaitForPodNameRunningInNamespace(...)\n\tk8s.io/kubernetes@v1.34.1/test/e2e/framework/pod/wait.go:570\ngithub.com/openshift/origin/test/extended/router.init.func3.1()\n\tgithub.com/openshift/origin/test/extended/router/external_certificate.go:71 +0x44b",
periodic-ci-openshift-release-main-nightly-4.21-e2e-metal-ipi-ovn-upgrade (all) - 6 runs, 17% failed, 400% of failures match = 67% impact
#2045980392548208640junit20 hours ago
2026-04-19T23:49:06Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:58316->192.168.111.5:6443: read: connection reset by peer
2026-04-19T23:49:08Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:58030->192.168.111.5:6443: read: connection reset by peer
2026-04-19T23:49:08Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:37676->192.168.111.5:6443: read: connection reset by peer
#2045873483497345024junit27 hours ago
2026-04-19T16:40:53Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2026-04-19T16:43:14Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:39354->192.168.111.5:6443: read: connection reset by peer
2026-04-19T16:43:15Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:36880->192.168.111.5:6443: read: connection reset by peer
2026-04-19T16:43:18Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:33670->192.168.111.5:6443: read: connection reset by peer
2026-04-19T16:49:32Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:34536->192.168.111.5:6443: read: connection reset by peer
2026-04-19T16:49:34Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - write tcp 192.168.111.21:51782->192.168.111.5:6443: write: connection reset by peer
2026-04-19T16:49:38Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:36074->192.168.111.5:6443: read: connection reset by peer
2026-04-19T16:49:40Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:33016->192.168.111.5:6443: read: connection reset by peer
#2045715799590572032junit37 hours ago
2026-04-19T06:30:59Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T07:00:23Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:38988->192.168.111.5:6443: read: connection reset by peer
2026-04-19T07:00:24Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:39660->192.168.111.5:6443: read: connection reset by peer
2026-04-19T07:00:27Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:46048->192.168.111.5:6443: read: connection reset by peer
2026-04-19T07:06:11Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:39920->192.168.111.5:6443: read: connection reset by peer
2026-04-19T07:06:13Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:46128->192.168.111.5:6443: read: connection reset by peer
2026-04-19T07:06:20Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:37746->192.168.111.5:6443: read: connection reset by peer
#2045599554799144960junit45 hours ago
2026-04-18T22:38:23Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:45036->192.168.111.5:6443: read: connection reset by peer
2026-04-18T22:38:23Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:52330->192.168.111.5:6443: read: connection reset by peer
2026-04-18T22:38:28Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:48198->192.168.111.5:6443: read: connection reset by peer
2026-04-18T22:38:29Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:37874->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.14-upgrade-from-stable-4.13-e2e-vsphere-ovn-upgrade-storage-data (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2046005438989733888junit20 hours ago
2026-04-20T01:17:04Z node/ci-op-w56qwxqm-4fb92-46zmp-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-w56qwxqm-4fb92.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-w56qwxqm-4fb92-46zmp-master-2?timeout=10s - dial tcp 10.93.251.14:6443: connect: connection refused
2026-04-20T01:17:13Z node/ci-op-w56qwxqm-4fb92-46zmp-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-w56qwxqm-4fb92.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-w56qwxqm-4fb92-46zmp-master-1?timeout=10s - read tcp 10.93.251.63:60394->10.93.251.14:6443: read: connection reset by peer
2026-04-20T01:17:31Z node/ci-op-w56qwxqm-4fb92-46zmp-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-w56qwxqm-4fb92.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-w56qwxqm-4fb92-46zmp-master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.18-upgrade-from-stable-4.17-e2e-metal-ipi-ovn-upgrade-network-flow-matrix (all) - 2 runs, 0% failed, 100% of runs match
#2045970801173204992junit20 hours ago
2026-04-20T00:01:14Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-20T00:01:18Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:58980->192.168.111.5:6443: read: connection reset by peer
2026-04-20T00:01:18Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:44978->192.168.111.5:6443: read: connection reset by peer
2026-04-20T00:08:31Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - dial tcp 192.168.111.5:6443: connect: connection refused
#2045608428159635456junit44 hours ago
2026-04-18T23:51:22Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-18T23:51:30Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:34830->192.168.111.5:6443: read: connection reset by peer
2026-04-18T23:51:34Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:58772->192.168.111.5:6443: read: connection reset by peer
2026-04-18T23:58:14Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
#2045608428159635456junit44 hours ago
2026-04-18T23:58:14Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-18T23:58:23Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:53304->192.168.111.5:6443: read: connection reset by peer
2026-04-18T23:58:23Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:57630->192.168.111.5:6443: read: connection reset by peer
2026-04-18T23:58:24Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:41224->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ovn-two-node-arbiter-upgrade-day-2-workers (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2045985925342695424junit20 hours ago
2026-04-20T00:55:04Z node/extraworker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-0?timeout=10s - read tcp 192.168.111.23:45774->192.168.111.5:6443: read: connection reset by peer
2026-04-20T00:55:04Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:33860->192.168.111.5:6443: read: connection reset by peer
2026-04-20T00:55:07Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - write tcp 192.168.111.20:54826->192.168.111.5:6443: write: connection reset by peer
2026-04-20T00:55:08Z node/extraworker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-1?timeout=10s - read tcp 192.168.111.24:51036->192.168.111.5:6443: read: connection reset by peer
2026-04-20T01:02:54Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:56332->192.168.111.5:6443: read: connection reset by peer
2026-04-20T01:02:55Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:36864->192.168.111.5:6443: read: connection reset by peer
2026-04-20T01:02:56Z node/extraworker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-0?timeout=10s - read tcp 192.168.111.23:36522->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-ovn-serial-ipsec (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#2045993360035942400junit20 hours ago
2026-04-19T23:22:15Z node/ip-10-0-21-176.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-blriljn2-ce422.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-176.us-east-2.compute.internal?timeout=10s - read tcp 10.0.21.176:59898->10.0.35.127:6443: read: connection reset by peer
2026-04-19T23:22:22Z node/ip-10-0-75-189.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-blriljn2-ce422.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-75-189.us-east-2.compute.internal?timeout=10s - read tcp 10.0.75.189:58624->10.0.98.131:6443: read: connection reset by peer
#2045630972715601920junit45 hours ago
2026-04-18T23:20:25Z node/ip-10-0-52-17.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b0gnc01x-ce422.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-17.us-west-1.compute.internal?timeout=10s - read tcp 10.0.52.17:51088->10.0.122.57:6443: read: connection reset by peer
2026-04-18T23:24:23Z node/ip-10-0-61-174.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b0gnc01x-ce422.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-174.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045630972715601920junit45 hours ago
2026-04-18T23:24:25Z node/ip-10-0-56-81.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b0gnc01x-ce422.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-56-81.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-18T23:24:25Z node/ip-10-0-56-81.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b0gnc01x-ce422.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-56-81.us-west-1.compute.internal?timeout=10s - read tcp 10.0.56.81:47186->10.0.122.57:6443: read: connection reset by peer
2026-04-18T23:24:25Z node/ip-10-0-94-49.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b0gnc01x-ce422.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-94-49.us-west-1.compute.internal?timeout=10s - context deadline exceeded
2026-04-18T23:24:26Z node/ip-10-0-70-153.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b0gnc01x-ce422.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-153.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-18T23:24:29Z node/ip-10-0-61-174.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b0gnc01x-ce422.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-174.us-west-1.compute.internal?timeout=10s - read tcp 10.0.61.174:60140->10.0.5.70:6443: read: connection reset by peer
2026-04-18T23:24:34Z node/ip-10-0-34-47.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-b0gnc01x-ce422.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-34-47.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-5.0-e2e-vsphere-ovn-upi-hybrid-env-techpreview (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#2046030102034124800junit20 hours ago
# step graph.Run multi-stage test e2e-vsphere-ovn-upi-hybrid-env-techpreview - e2e-vsphere-ovn-upi-hybrid-env-techpreview-gather-must-gather container test
0.44:60890->10.5.183.2:6443: read: connection reset by peer"
error completing cluster type inspection: error running backup collection: Get "https://api.ci-op-lbbsrrfl-f8d36.vmc-ci.devcluster.openshift.com:6443/api?timeout=32s": EOF - error from a previous attempt: read tcp 10.128.0.44:60890->10.5.183.2:6443: read: connection reset by peer
Falling back to `oc adm inspect namespace/openshift-cluster-version` to collect basic cluster named resources.
E0420 01:05:00.190762     110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://api.ci-op-lbbsrrfl-f8d36.vmc-ci.devcluster.openshift.com:6443/api?timeout=32s\": EOF"
E0420 01:05:10.256100     110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://api.ci-op-lbbsrrfl-f8d36.vmc-ci.devcluster.openshift.com:6443/api?timeout=32s\": read tcp 10.128.0.44:58868->10.5.183.2:6443: read: connection reset by peer - error from a previous attempt: EOF"
W0420 01:05:12.395344     110 util.go:147] the server doesn't have a resource type egressfirewalls, skipping the inspection
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-ovn-upgrade-ipsec (all) - 2 runs, 0% failed, 100% of runs match
#2045994869905690624junit21 hours ago
2026-04-19T23:28:02Z node/ip-10-0-51-93.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tvrlhb5h-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-93.us-west-2.compute.internal?timeout=10s - read tcp 10.0.51.93:36184->10.0.33.106:6443: read: connection reset by peer
2026-04-19T23:31:43Z node/ip-10-0-67-87.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tvrlhb5h-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-87.us-west-2.compute.internal?timeout=10s - read tcp 10.0.67.87:42794->10.0.33.106:6443: read: connection reset by peer
2026-04-19T23:31:46Z node/ip-10-0-26-34.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tvrlhb5h-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-26-34.us-west-2.compute.internal?timeout=10s - read tcp 10.0.26.34:58628->10.0.33.106:6443: read: connection reset by peer
2026-04-19T23:53:00Z node/ip-10-0-6-91.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tvrlhb5h-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-91.us-west-2.compute.internal?timeout=10s - read tcp 10.0.6.91:58716->10.0.33.106:6443: read: connection reset by peer
2026-04-19T23:56:55Z node/ip-10-0-11-168.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tvrlhb5h-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-11-168.us-west-2.compute.internal?timeout=10s - read tcp 10.0.11.168:45576->10.0.80.49:6443: read: connection reset by peer
2026-04-20T00:00:40Z node/ip-10-0-6-91.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tvrlhb5h-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-91.us-west-2.compute.internal?timeout=10s - read tcp 10.0.6.91:40664->10.0.80.49:6443: read: connection reset by peer
2026-04-20T00:13:47Z node/ip-10-0-127-27.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-tvrlhb5h-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-127-27.us-west-2.compute.internal?timeout=10s - read tcp 10.0.127.27:58744->10.0.80.49:6443: read: connection reset by peer
#2045632478223273984junit45 hours ago
2026-04-18T23:30:14Z node/ip-10-0-114-116.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-r8zb7i57-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-116.us-west-2.compute.internal?timeout=10s - read tcp 10.0.114.116:46092->10.0.48.141:6443: read: connection reset by peer
2026-04-18T23:34:19Z node/ip-10-0-16-168.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-r8zb7i57-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-168.us-west-2.compute.internal?timeout=10s - read tcp 10.0.16.168:50352->10.0.99.200:6443: read: connection reset by peer
2026-04-18T23:34:22Z node/ip-10-0-45-166.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-r8zb7i57-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-45-166.us-west-2.compute.internal?timeout=10s - read tcp 10.0.45.166:33524->10.0.48.141:6443: read: connection reset by peer
2026-04-18T23:57:27Z node/ip-10-0-4-142.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-r8zb7i57-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-142.us-west-2.compute.internal?timeout=10s - read tcp 10.0.4.142:56494->10.0.48.141:6443: read: connection reset by peer
2026-04-19T00:18:20Z node/ip-10-0-4-142.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-r8zb7i57-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-142.us-west-2.compute.internal?timeout=10s - read tcp 10.0.4.142:55358->10.0.48.141:6443: read: connection reset by peer
2026-04-19T00:18:28Z node/ip-10-0-45-166.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-r8zb7i57-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-45-166.us-west-2.compute.internal?timeout=10s - read tcp 10.0.45.166:49086->10.0.99.200:6443: read: connection reset by peer
2026-04-19T00:21:52Z node/ip-10-0-16-168.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-r8zb7i57-2b3d5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-168.us-west-2.compute.internal?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-ipv6-recovery (all) - 4 runs, 50% failed, 100% of failures match = 50% impact
#2045960392127025152junit21 hours ago
2026-04-19T22:12:58Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - http2: client connection lost
2026-04-19T22:51:49Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::15]:49794->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T22:51:59Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045960392127025152junit21 hours ago
2026-04-19T22:55:28Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
2026-04-19T23:09:06Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::15]:56034->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T23:09:16Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045809396725846016junit31 hours ago
2026-04-19T12:15:08Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - http2: client connection lost
2026-04-19T12:16:01Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::14]:40050->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T12:16:11Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.17-aws-ovn-network-flow-matrix (all) - 2 runs, 0% failed, 100% of runs match
#2045970787768209408junit21 hours ago
2026-04-19T23:15:07Z node/ip-10-0-9-7.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5ixy0vwl-82c38.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-7.ec2.internal?timeout=10s - read tcp 10.0.9.7:59010->10.0.84.162:6443: read: connection reset by peer
2026-04-19T23:31:58Z node/ip-10-0-9-7.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5ixy0vwl-82c38.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-9-7.ec2.internal?timeout=10s - read tcp 10.0.9.7:45428->10.0.12.71:6443: read: connection reset by peer
2026-04-19T23:32:05Z node/ip-10-0-87-109.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5ixy0vwl-82c38.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-87-109.ec2.internal?timeout=10s - read tcp 10.0.87.109:50680->10.0.84.162:6443: read: connection reset by peer
2026-04-19T23:32:30Z node/ip-10-0-70-192.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5ixy0vwl-82c38.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-70-192.ec2.internal?timeout=10s - read tcp 10.0.70.192:39286->10.0.84.162:6443: read: connection reset by peer
#2045608414117105664junit45 hours ago
2026-04-18T23:41:28Z node/ip-10-0-45-156.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-kh26rc3t-82c38.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-45-156.ec2.internal?timeout=10s - read tcp 10.0.45.156:35928->10.0.110.117:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.19-e2e-aws-ovn-kube-apiserver-rollout (all) - 1 runs, 0% failed, 100% of runs match
#2045976246382235648junit22 hours ago
Apr 19 23:20:57.784 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-multus/whereabouts-flatfile-config: failed to apply / update (/v1, Kind=ConfigMap) openshift-multus/whereabouts-flatfile-config: Patch "https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-multus/configmaps/whereabouts-flatfile-config?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.77.128:56106->10.0.87.44:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 23:20:57.784 - 27s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-multus/whereabouts-flatfile-config: failed to apply / update (/v1, Kind=ConfigMap) openshift-multus/whereabouts-flatfile-config: Patch "https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-multus/configmaps/whereabouts-flatfile-config?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.77.128:56106->10.0.87.44:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 23:21:24.975 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
#2045976246382235648junit22 hours ago
2026-04-19T22:25:46Z node/ip-10-0-22-85.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-85.us-west-2.compute.internal?timeout=10s - read tcp 10.0.22.85:44192->10.0.46.11:6443: read: connection reset by peer
2026-04-19T22:25:56Z node/ip-10-0-22-85.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-85.us-west-2.compute.internal?timeout=10s - read tcp 10.0.22.85:44338->10.0.46.11:6443: read: connection reset by peer
2026-04-19T22:29:50Z node/ip-10-0-67-65.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-65.us-west-2.compute.internal?timeout=10s - read tcp 10.0.67.65:33972->10.0.46.11:6443: read: connection reset by peer
2026-04-19T22:29:51Z node/ip-10-0-79-78.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-78.us-west-2.compute.internal?timeout=10s - read tcp 10.0.79.78:36792->10.0.46.11:6443: read: connection reset by peer
2026-04-19T22:29:53Z node/ip-10-0-73-131.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-131.us-west-2.compute.internal?timeout=10s - read tcp 10.0.73.131:38646->10.0.46.11:6443: read: connection reset by peer
2026-04-19T22:30:15Z node/ip-10-0-41-25.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-41-25.us-west-2.compute.internal?timeout=10s - read tcp 10.0.41.25:47928->10.0.87.44:6443: read: connection reset by peer
2026-04-19T22:33:45Z node/ip-10-0-22-85.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-85.us-west-2.compute.internal?timeout=10s - read tcp 10.0.22.85:50508->10.0.87.44:6443: read: connection reset by peer
2026-04-19T22:38:22Z node/ip-10-0-79-78.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-78.us-west-2.compute.internal?timeout=10s - read tcp 10.0.79.78:35916->10.0.46.11:6443: read: connection reset by peer
2026-04-19T22:38:24Z node/ip-10-0-73-131.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-131.us-west-2.compute.internal?timeout=10s - read tcp 10.0.73.131:49088->10.0.46.11:6443: read: connection reset by peer
2026-04-19T22:38:49Z node/ip-10-0-77-128.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-77-128.us-west-2.compute.internal?timeout=10s - read tcp 10.0.77.128:50456->10.0.87.44:6443: read: connection reset by peer
2026-04-19T22:42:25Z node/ip-10-0-22-85.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-85.us-west-2.compute.internal?timeout=10s - read tcp 10.0.22.85:57754->10.0.46.11:6443: read: connection reset by peer
2026-04-19T22:42:25Z node/ip-10-0-22-85.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-97xbzbqr-aee74.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-22-85.us-west-2.compute.internal?timeout=10s - http2: client connection force closed via ClientConn.Close
periodic-ci-openshift-release-main-ci-5.0-e2e-azure-ovn-upgrade (all) - 80 runs, 29% failed, 4% of failures match = 1% impact
#2045943119538556928junit22 hours ago
    <*fmt.wrapError | 0xc0021907e0>:
    error reading from error stream: next reader: read tcp 172.24.90.9:56348->40.87.24.143:6443: read: connection reset by peer
    {
        msg: "error reading from error stream: next reader: read tcp 172.24.90.9:56348->40.87.24.143:6443: read: connection reset by peer",
        err: <*fmt.wrapError | 0xc00225da00>{
            msg: "next reader: read tcp 172.24.90.9:56348->40.87.24.143:6443: read: connection reset by peer",
            err: <*net.OpError | 0xc00220d590>{
periodic-ci-openshift-release-main-nightly-4.20-upgrade-from-stable-4.19-e2e-metal-ipi-ovn-upgrade-network-flow-matrix (all) - 2 runs, 0% failed, 100% of runs match
#2045940607121100800junit22 hours ago
2026-04-19T21:46:24Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-19T21:46:30Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:51694->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:46:32Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - write tcp 192.168.111.21:55818->192.168.111.5:6443: write: connection reset by peer
2026-04-19T21:46:35Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:53960->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:46:38Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:57048->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:52:44Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - dial tcp 192.168.111.5:6443: connect: connection refused
#2045578207905714176junit46 hours ago
2026-04-18T21:41:01Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-18T21:41:02Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:55322->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:41:02Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:58482->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:41:06Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - write tcp 192.168.111.21:46488->192.168.111.5:6443: write: connection reset by peer
#2045578207905714176junit46 hours ago
2026-04-18T21:47:35Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-18T21:47:49Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:38538->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:47:50Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:53878->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:47:52Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:53042->192.168.111.5:6443: read: connection reset by peer
pull-ci-openshift-origin-release-4.19-e2e-aws-ovn-serial-1of2 (all) - 3 runs, 67% failed, 50% of failures match = 33% impact
#2045965218684604416junit22 hours ago
2026-04-19T21:44:00Z node/ip-10-0-62-148.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-2d76zktt-84204.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-148.us-west-2.compute.internal?timeout=10s - read tcp 10.0.62.148:42512->10.0.16.235:6443: read: connection reset by peer
2026-04-19T21:47:38Z node/ip-10-0-10-87.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-2d76zktt-84204.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-10-87.us-west-2.compute.internal?timeout=10s - read tcp 10.0.10.87:51396->10.0.16.235:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-upgrade-from-stable-4.19-e2e-vsphere-zones-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045976498074030080junit22 hours ago
2026-04-19T23:02:05Z node/ci-op-p16fx6ir-5a92c-7g2dd-worker-1-9nhxk - reason/FailedToUpdateLease https://api-int.ci-op-p16fx6ir-5a92c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-p16fx6ir-5a92c-7g2dd-worker-1-9nhxk?timeout=10s - read tcp 10.95.160.23:35394->10.95.160.12:6443: read: connection reset by peer
2026-04-19T23:02:16Z node/ci-op-p16fx6ir-5a92c-7g2dd-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-p16fx6ir-5a92c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-p16fx6ir-5a92c-7g2dd-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045976498074030080junit22 hours ago
2026-04-19T23:02:36Z node/ci-op-p16fx6ir-5a92c-7g2dd-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-p16fx6ir-5a92c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-p16fx6ir-5a92c-7g2dd-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T23:07:54Z node/ci-op-p16fx6ir-5a92c-7g2dd-worker-1-9nhxk - reason/FailedToUpdateLease https://api-int.ci-op-p16fx6ir-5a92c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-p16fx6ir-5a92c-7g2dd-worker-1-9nhxk?timeout=10s - read tcp 10.95.160.23:55782->10.95.160.12:6443: read: connection reset by peer
2026-04-19T23:07:58Z node/ci-op-p16fx6ir-5a92c-7g2dd-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-p16fx6ir-5a92c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-p16fx6ir-5a92c-7g2dd-master-2?timeout=10s - write tcp 10.95.160.124:36934->10.95.160.12:6443: write: connection reset by peer
2026-04-19T23:07:58Z node/ci-op-p16fx6ir-5a92c-7g2dd-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-p16fx6ir-5a92c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-p16fx6ir-5a92c-7g2dd-master-1?timeout=10s - read tcp 10.95.160.122:60750->10.95.160.12:6443: read: connection reset by peer
2026-04-19T23:08:12Z node/ci-op-p16fx6ir-5a92c-7g2dd-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-p16fx6ir-5a92c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-p16fx6ir-5a92c-7g2dd-master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.19-upgrade-from-stable-4.18-e2e-metal-ipi-ovn-upgrade-network-flow-matrix (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2045940603774046208junit22 hours ago
2026-04-19T21:55:41Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-19T21:55:48Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:42622->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:55:50Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:43414->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:00:53Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045940603774046208junit22 hours ago
2026-04-19T22:03:18Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-19T22:03:27Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:54442->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:03:28Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - write tcp 192.168.111.22:39820->192.168.111.5:6443: write: broken pipe
2026-04-19T22:03:33Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:41760->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.20-ocp-e2e-aws-ovn-multi-a-a (all) - 9 runs, 22% failed, 50% of failures match = 11% impact
#2045956865648496640junit22 hours ago
2026-04-19T20:58:08Z node/ip-10-0-92-115.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-wnlzpwvc-1da68.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-92-115.us-west-2.compute.internal?timeout=10s - read tcp 10.0.92.115:35600->10.0.51.31:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-recovery-2of3 (all) - 4 runs, 75% failed, 67% of failures match = 50% impact
#2045960392219299840junit22 hours ago
2026-04-19T22:05:47Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T22:19:19Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:54606->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:19:37Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - context deadline exceeded (Client.Timeout exceeded while awaiting headers)
#2045960392219299840junit22 hours ago
2026-04-19T22:37:15Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - context deadline exceeded
2026-04-19T22:53:13Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:59090->192.168.111.5:6443: read: connection reset by peer
2026-04-19T22:53:23Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045598000486551552junit46 hours ago
2026-04-18T22:30:06Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-18T22:44:11Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:43004->192.168.111.5:6443: read: connection reset by peer
2026-04-18T22:44:26Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - context deadline exceeded (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.19-upgrade-from-stable-4.18-e2e-vsphere-upgrade-vcf9 (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2045970207393976320junit22 hours ago
2026-04-19T22:36:53Z node/ci-op-c3whg6qi-b88fb-xw6zc-worker-0-5dslx - reason/FailedToUpdateLease https://api-int.ci-op-c3whg6qi-b88fb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-c3whg6qi-b88fb-xw6zc-worker-0-5dslx?timeout=10s - read tcp 10.95.160.120:39734->10.95.160.8:6443: read: connection reset by peer
2026-04-19T22:36:54Z node/ci-op-c3whg6qi-b88fb-xw6zc-worker-0-mk5ql - reason/FailedToUpdateLease https://api-int.ci-op-c3whg6qi-b88fb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-c3whg6qi-b88fb-xw6zc-worker-0-mk5ql?timeout=10s - read tcp 10.95.160.121:43990->10.95.160.8:6443: read: connection reset by peer
2026-04-19T22:36:55Z node/ci-op-c3whg6qi-b88fb-xw6zc-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-c3whg6qi-b88fb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-c3whg6qi-b88fb-xw6zc-master-2?timeout=10s - read tcp 10.95.160.116:35174->10.95.160.8:6443: read: connection reset by peer
2026-04-19T22:36:59Z node/ci-op-c3whg6qi-b88fb-xw6zc-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-c3whg6qi-b88fb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-c3whg6qi-b88fb-xw6zc-master-1?timeout=10s - read tcp 10.95.160.115:37506->10.95.160.8:6443: read: connection reset by peer
2026-04-19T22:37:03Z node/ci-op-c3whg6qi-b88fb-xw6zc-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-c3whg6qi-b88fb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-c3whg6qi-b88fb-xw6zc-master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045970207393976320junit22 hours ago
2026-04-19T22:49:33Z node/ci-op-c3whg6qi-b88fb-xw6zc-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-c3whg6qi-b88fb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-c3whg6qi-b88fb-xw6zc-master-0?timeout=10s - dial tcp 10.95.160.8:6443: connect: connection refused
2026-04-19T22:49:46Z node/ci-op-c3whg6qi-b88fb-xw6zc-worker-0-mk5ql - reason/FailedToUpdateLease https://api-int.ci-op-c3whg6qi-b88fb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-c3whg6qi-b88fb-xw6zc-worker-0-mk5ql?timeout=10s - read tcp 10.95.160.121:53806->10.95.160.8:6443: read: connection reset by peer
2026-04-19T22:49:47Z node/ci-op-c3whg6qi-b88fb-xw6zc-worker-0-5dslx - reason/FailedToUpdateLease https://api-int.ci-op-c3whg6qi-b88fb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-c3whg6qi-b88fb-xw6zc-worker-0-5dslx?timeout=10s - read tcp 10.95.160.120:56858->10.95.160.8:6443: read: connection reset by peer
2026-04-19T22:49:51Z node/ci-op-c3whg6qi-b88fb-xw6zc-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-c3whg6qi-b88fb.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-c3whg6qi-b88fb-xw6zc-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-shiftstack-ci-release-4.22-e2e-openstack-csi-cinder (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045952590646087680junit23 hours ago
2026-04-19T20:50:47Z node/fgyjcwnn-797da-mpd5n-master-0 - reason/FailedToUpdateLease https://api-int.fgyjcwnn-797da.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/fgyjcwnn-797da-mpd5n-master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T22:12:17Z node/fgyjcwnn-797da-mpd5n-master-0 - reason/FailedToUpdateLease https://api-int.fgyjcwnn-797da.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/fgyjcwnn-797da-mpd5n-master-0?timeout=10s - read tcp 10.0.1.102:37826->10.0.0.5:6443: read: connection reset by peer
2026-04-19T22:15:35Z node/fgyjcwnn-797da-mpd5n-worker-0-4jmpp - reason/FailedToUpdateLease https://api-int.fgyjcwnn-797da.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/fgyjcwnn-797da-mpd5n-worker-0-4jmpp?timeout=10s - read tcp 10.0.1.13:38468->10.0.0.5:6443: read: connection reset by peer
2026-04-19T22:15:43Z node/fgyjcwnn-797da-mpd5n-master-2 - reason/FailedToUpdateLease https://api-int.fgyjcwnn-797da.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/fgyjcwnn-797da-mpd5n-master-2?timeout=10s - read tcp 10.0.1.10:46882->10.0.0.5:6443: read: connection reset by peer
2026-04-19T22:15:48Z node/fgyjcwnn-797da-mpd5n-master-1 - reason/FailedToUpdateLease https://api-int.fgyjcwnn-797da.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/fgyjcwnn-797da-mpd5n-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045952590646087680junit23 hours ago
2026-04-19T22:16:55Z node/fgyjcwnn-797da-mpd5n-worker-0-ztjff - reason/FailedToUpdateLease https://api-int.fgyjcwnn-797da.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/fgyjcwnn-797da-mpd5n-worker-0-ztjff?timeout=10s - stream error: stream ID 295; INTERNAL_ERROR; received from peer
2026-04-19T22:17:37Z node/fgyjcwnn-797da-mpd5n-worker-0-4jmpp - reason/FailedToUpdateLease https://api-int.fgyjcwnn-797da.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/fgyjcwnn-797da-mpd5n-worker-0-4jmpp?timeout=10s - read tcp 10.0.1.13:35586->10.0.0.5:6443: read: connection reset by peer
2026-04-19T22:17:46Z node/fgyjcwnn-797da-mpd5n-master-0 - reason/FailedToUpdateLease https://api-int.fgyjcwnn-797da.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/fgyjcwnn-797da-mpd5n-master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-shiftstack-ci-release-4.20-e2e-openstack-dualstack (all) - 1 runs, 0% failed, 100% of runs match
#2045946786350108672junit23 hours ago
2026-04-19T20:33:46Z node/471rm04i-4d9bd-7q77n-master-1 - reason/FailedToUpdateLease https://api-int.471rm04i-4d9bd.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/471rm04i-4d9bd-7q77n-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T20:33:47Z node/471rm04i-4d9bd-7q77n-master-2 - reason/FailedToUpdateLease https://api-int.471rm04i-4d9bd.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/471rm04i-4d9bd-7q77n-master-2?timeout=10s - read tcp 192.168.25.133:48242->192.168.25.112:6443: read: connection reset by peer
2026-04-19T20:33:56Z node/471rm04i-4d9bd-7q77n-master-1 - reason/FailedToUpdateLease https://api-int.471rm04i-4d9bd.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/471rm04i-4d9bd-7q77n-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045946786350108672junit23 hours ago
2026-04-19T20:34:11Z node/471rm04i-4d9bd-7q77n-master-1 - reason/FailedToUpdateLease https://api-int.471rm04i-4d9bd.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/471rm04i-4d9bd-7q77n-master-1?timeout=10s - http2: client connection lost
2026-04-19T20:34:33Z node/471rm04i-4d9bd-7q77n-master-2 - reason/FailedToUpdateLease https://api-int.471rm04i-4d9bd.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/471rm04i-4d9bd-7q77n-master-2?timeout=10s - read tcp 192.168.25.133:51834->192.168.25.112:6443: read: connection reset by peer
2026-04-19T20:34:33Z node/471rm04i-4d9bd-7q77n-master-1 - reason/FailedToUpdateLease https://api-int.471rm04i-4d9bd.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/471rm04i-4d9bd-7q77n-master-1?timeout=10s - read tcp 192.168.25.250:40174->192.168.25.112:6443: read: connection reset by peer
2026-04-19T20:34:39Z node/471rm04i-4d9bd-7q77n-master-0 - reason/FailedToUpdateLease https://api-int.471rm04i-4d9bd.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/471rm04i-4d9bd-7q77n-master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.18-aws-ovn-network-flow-matrix (all) - 2 runs, 0% failed, 100% of runs match
#2045940591212105728junit23 hours ago
2026-04-19T19:54:44Z node/ip-10-0-126-35.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4mft015p-16905.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-126-35.us-east-2.compute.internal?timeout=10s - read tcp 10.0.126.35:38176->10.0.105.191:6443: read: connection reset by peer
2026-04-19T19:55:12Z node/ip-10-0-67-72.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4mft015p-16905.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-72.us-east-2.compute.internal?timeout=10s - read tcp 10.0.67.72:38484->10.0.105.191:6443: read: connection reset by peer
2026-04-19T21:21:33Z node/ip-10-0-114-204.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4mft015p-16905.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-204.us-east-2.compute.internal?timeout=10s - read tcp 10.0.114.204:39936->10.0.105.191:6443: read: connection reset by peer
2026-04-19T21:26:30Z node/ip-10-0-72-207.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4mft015p-16905.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-207.us-east-2.compute.internal?timeout=10s - read tcp 10.0.72.207:59616->10.0.105.191:6443: read: connection reset by peer
2026-04-19T21:29:52Z node/ip-10-0-114-204.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4mft015p-16905.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-204.us-east-2.compute.internal?timeout=10s - read tcp 10.0.114.204:41504->10.0.23.160:6443: read: connection reset by peer
2026-04-19T21:30:04Z node/ip-10-0-4-221.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4mft015p-16905.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-221.us-east-2.compute.internal?timeout=10s - read tcp 10.0.4.221:46684->10.0.105.191:6443: read: connection reset by peer
2026-04-19T21:55:22Z node/ip-10-0-114-204.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4mft015p-16905.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-114-204.us-east-2.compute.internal?timeout=10s - context deadline exceeded
#2045578192047050752junit47 hours ago
2026-04-18T21:10:47Z node/ip-10-0-117-92.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bwzh3vxv-16905.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-92.ec2.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-18T21:19:42Z node/ip-10-0-24-120.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bwzh3vxv-16905.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-120.ec2.internal?timeout=10s - read tcp 10.0.24.120:38220->10.0.46.246:6443: read: connection reset by peer
2026-04-18T21:23:50Z node/ip-10-0-96-205.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bwzh3vxv-16905.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-96-205.ec2.internal?timeout=10s - read tcp 10.0.96.205:55020->10.0.46.246:6443: read: connection reset by peer
2026-04-18T21:27:44Z node/ip-10-0-62-146.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bwzh3vxv-16905.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-146.ec2.internal?timeout=10s - read tcp 10.0.62.146:59090->10.0.46.246:6443: read: connection reset by peer
2026-04-18T21:40:30Z node/ip-10-0-62-146.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bwzh3vxv-16905.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-146.ec2.internal?timeout=10s - read tcp 10.0.62.146:44968->10.0.46.246:6443: read: connection reset by peer
2026-04-18T21:40:32Z node/ip-10-0-111-186.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-bwzh3vxv-16905.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-111-186.ec2.internal?timeout=10s - read tcp 10.0.111.186:46328->10.0.117.46:6443: read: connection reset by peer
#2045578192047050752junit47 hours ago
Apr 18 21:36:42.716 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-multus/multus-ac: failed to apply / update (/v1, Kind=ServiceAccount) openshift-multus/multus-ac: Patch "https://api-int.ci-op-bwzh3vxv-16905.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.123.193:54002->10.0.117.46:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 18 21:36:42.716 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-multus/multus-ac: failed to apply / update (/v1, Kind=ServiceAccount) openshift-multus/multus-ac: Patch "https://api-int.ci-op-bwzh3vxv-16905.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.123.193:54002->10.0.117.46:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 18 21:37:10.904 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
pull-ci-openshift-cluster-baremetal-operator-main-e2e-metal-ipi-serial-ipv4 (all) - 2 runs, 0% failed, 50% of runs match
#2045910297721442304junit23 hours ago
2026-04-19T18:59:08Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:48272->192.168.111.5:6443: read: connection reset by peer
2026-04-19T18:59:08Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:47148->192.168.111.5:6443: read: connection reset by peer
2026-04-19T18:59:10Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:34464->192.168.111.5:6443: read: connection reset by peer
2026-04-19T18:59:16Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-ci-4.20-upgrade-from-stable-4.19-e2e-vsphere-ovn-upgrade (all) - 8 runs, 13% failed, 300% of failures match = 38% impact
#2045949661629386752junit23 hours ago
2026-04-19T21:15:20Z node/ci-op-ip28cbn3-cc454-v2zzv-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-ip28cbn3-cc454.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-ip28cbn3-cc454-v2zzv-master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T21:43:43Z node/ci-op-ip28cbn3-cc454-v2zzv-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-ip28cbn3-cc454.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-ip28cbn3-cc454-v2zzv-master-1?timeout=10s - read tcp 10.95.160.100:39478->10.95.160.14:6443: read: connection reset by peer
2026-04-19T21:43:53Z node/ci-op-ip28cbn3-cc454-v2zzv-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-ip28cbn3-cc454.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-ip28cbn3-cc454-v2zzv-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045811473841655808junit33 hours ago
2026-04-19T12:20:55Z node/ci-op-by4qrqjv-cc454-kt46p-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-by4qrqjv-cc454.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-by4qrqjv-cc454-kt46p-master-1?timeout=10s - write tcp 10.93.152.104:45196->10.93.152.8:6443: write: connection reset by peer
2026-04-19T12:20:59Z node/ci-op-by4qrqjv-cc454-kt46p-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-by4qrqjv-cc454.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-by4qrqjv-cc454-kt46p-master-0?timeout=10s - read tcp 10.93.152.102:51492->10.93.152.8:6443: read: connection reset by peer
2026-04-19T12:21:00Z node/ci-op-by4qrqjv-cc454-kt46p-worker-0-zdbpp - reason/FailedToUpdateLease https://api-int.ci-op-by4qrqjv-cc454.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-by4qrqjv-cc454-kt46p-worker-0-zdbpp?timeout=10s - read tcp 10.93.152.110:48100->10.93.152.8:6443: read: connection reset by peer
2026-04-19T12:21:10Z node/ci-op-by4qrqjv-cc454-kt46p-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-by4qrqjv-cc454.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-by4qrqjv-cc454-kt46p-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045600180530581504junit47 hours ago
2026-04-18T22:14:02Z node/ci-op-0csfs0hb-cc454-przmn-worker-0-kl6c4 - reason/FailedToUpdateLease https://api-int.ci-op-0csfs0hb-cc454.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-0csfs0hb-cc454-przmn-worker-0-kl6c4?timeout=10s - dial tcp 10.95.160.16:6443: connect: connection refused
2026-04-18T22:14:13Z node/ci-op-0csfs0hb-cc454-przmn-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-0csfs0hb-cc454.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-0csfs0hb-cc454-przmn-master-1?timeout=10s - read tcp 10.95.160.40:56220->10.95.160.16:6443: read: connection reset by peer
2026-04-18T22:14:13Z node/ci-op-0csfs0hb-cc454-przmn-worker-0-kl6c4 - reason/FailedToUpdateLease https://api-int.ci-op-0csfs0hb-cc454.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-0csfs0hb-cc454-przmn-worker-0-kl6c4?timeout=10s - read tcp 10.95.160.51:39074->10.95.160.16:6443: read: connection reset by peer
2026-04-18T22:14:13Z node/ci-op-0csfs0hb-cc454-przmn-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-0csfs0hb-cc454.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-0csfs0hb-cc454-przmn-master-2?timeout=10s - write tcp 10.95.160.42:42612->10.95.160.16:6443: write: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-dualstack-rhcos10-techpreview (all) - 4 runs, 100% failed, 75% of failures match = 75% impact
#2045931702550794240junit23 hours ago
2026-04-19T21:45:54Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:39250->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:45:56Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:50774->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:45:56Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:42996->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:45:58Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:40728->192.168.111.5:6443: read: connection reset by peer
2026-04-19T21:46:09Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045750506638282752junit35 hours ago
2026-04-19T09:34:42Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:54640->192.168.111.5:6443: read: connection reset by peer
2026-04-19T09:34:43Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:59398->192.168.111.5:6443: read: connection reset by peer
2026-04-19T09:34:45Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045569059763785728junit47 hours ago
': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
namespace/e2e-test-oc-adm-must-gather-8txcc node/worker-2 pod/perf-node-gather-daemonset-9dvkg hmsg/24281dc431 - race condition: sandbox failure at pod creation time (new pod created -104.34 seconds after deletion started) - firstTimestamp/2026-04-18T21:35:28Z interesting/true lastTimestamp/2026-04-18T21:35:28Z reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perf-node-gather-daemonset-9dvkg_e2e-test-oc-adm-must-gather-8txcc_fd92142d-066b-4505-8bd3-fc1670bcb06b_0(bf61a89dcbc1c469865b0ba360960ff40fd2a217434fd8a5d0f5aad8f09329cb): error adding pod e2e-test-oc-adm-must-gather-8txcc_perf-node-gather-daemonset-9dvkg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bf61a89dcbc1c469865b0ba360960ff40fd2a217434fd8a5d0f5aad8f09329cb" Netns:"/var/run/netns/bbd70a87-24e3-410d-9bf7-a465bb945b72" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=e2e-test-oc-adm-must-gather-8txcc;K8S_POD_NAME=perf-node-gather-daemonset-9dvkg;K8S_POD_INFRA_CONTAINER_ID=bf61a89dcbc1c469865b0ba360960ff40fd2a217434fd8a5d0f5aad8f09329cb;K8S_POD_UID=fd92142d-066b-4505-8bd3-fc1670bcb06b" Path:"" ERRORED: error configuring pod [e2e-test-oc-adm-must-gather-8txcc/perf-node-gather-daemonset-9dvkg] networking: Multus: [e2e-test-oc-adm-must-gather-8txcc/perf-node-gather-daemonset-9dvkg/fd92142d-066b-4505-8bd3-fc1670bcb06b]: error waiting for pod: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: read tcp 192.168.111.25:50318->192.168.111.5:6443: read: connection reset by peer
': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
namespace/openshift-must-gather-k694p node/master-0.ostest.test.metalkube.org pod/perf-node-gather-daemonset-v7gg4 hmsg/ae55ad8bfa - race condition: sandbox failure at pod creation time (new pod created -92.87 seconds after deletion started) - firstTimestamp/2026-04-18T21:35:42Z interesting/true lastTimestamp/2026-04-18T21:35:42Z reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perf-node-gather-daemonset-v7gg4_openshift-must-gather-k694p_e3f80e19-63aa-444d-9416-83828bc8e815_0(7fea41dd23f0a4001442dc16d8574fdf6d3d5b18975a1e9519cd33b1a91fcf08): error adding pod openshift-must-gather-k694p_perf-node-gather-daemonset-v7gg4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7fea41dd23f0a4001442dc16d8574fdf6d3d5b18975a1e9519cd33b1a91fcf08" Netns:"/var/run/netns/0f1fbd25-2f7e-4b2c-9a8f-c41292fcf627" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-must-gather-k694p;K8S_POD_NAME=perf-node-gather-daemonset-v7gg4;K8S_POD_INFRA_CONTAINER_ID=7fea41dd23f0a4001442dc16d8574fdf6d3d5b18975a1e9519cd33b1a91fcf08;K8S_POD_UID=e3f80e19-63aa-444d-9416-83828bc8e815" Path:"" ERRORED: error configuring pod [openshift-must-gather-k694p/perf-node-gather-daemonset-v7gg4] networking: Multus: [openshift-must-gather-k694p/perf-node-gather-daemonset-v7gg4/e3f80e19-63aa-444d-9416-83828bc8e815]: error waiting for pod: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: read tcp 192.168.111.20:34978->192.168.111.5:6443: read: connection reset by peer
': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
#2045569059763785728junit47 hours ago
2026-04-18T21:35:21Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:39714->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:35:39Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.23:36242->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:35:39Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.20:34518->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:35:39Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.24:59936->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:35:40Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:41660->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:35:43Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-ci-4.16-e2e-vsphere-ovn-upgrade (all) - 2 runs, 0% failed, 100% of runs match
#2045952338572611584junit23 hours ago
2026-04-19T21:48:16Z node/ci-op-97ds1f1c-8c867-ngnk4-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-97ds1f1c-8c867.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-97ds1f1c-8c867-ngnk4-master-2?timeout=10s - dial tcp 10.93.152.8:6443: connect: connection refused
2026-04-19T21:48:25Z node/ci-op-97ds1f1c-8c867-ngnk4-worker-0-sqklb - reason/FailedToUpdateLease https://api-int.ci-op-97ds1f1c-8c867.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-97ds1f1c-8c867-ngnk4-worker-0-sqklb?timeout=10s - read tcp 10.93.152.52:34992->10.93.152.8:6443: read: connection reset by peer
2026-04-19T21:48:27Z node/ci-op-97ds1f1c-8c867-ngnk4-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-97ds1f1c-8c867.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-97ds1f1c-8c867-ngnk4-master-2?timeout=10s - read tcp 10.93.152.48:52380->10.93.152.8:6443: read: connection reset by peer
2026-04-19T21:48:28Z node/ci-op-97ds1f1c-8c867-ngnk4-worker-0-6g55p - reason/FailedToUpdateLease https://api-int.ci-op-97ds1f1c-8c867.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-97ds1f1c-8c867-ngnk4-worker-0-6g55p?timeout=10s - read tcp 10.93.152.53:36590->10.93.152.8:6443: read: connection reset by peer
2026-04-19T21:48:39Z node/ci-op-97ds1f1c-8c867-ngnk4-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-97ds1f1c-8c867.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-97ds1f1c-8c867-ngnk4-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045589947150241792 junit 2 days ago
2026-04-18T21:13:02Z node/ci-op-qsfvhmt7-8c867-j8qdc-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-qsfvhmt7-8c867.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qsfvhmt7-8c867-j8qdc-master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-18T21:17:14Z node/ci-op-qsfvhmt7-8c867-j8qdc-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-qsfvhmt7-8c867.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qsfvhmt7-8c867-j8qdc-master-2?timeout=10s - read tcp 10.93.152.37:37596->10.93.152.4:6443: read: connection reset by peer
2026-04-18T21:17:16Z node/ci-op-qsfvhmt7-8c867-j8qdc-worker-0-f5t76 - reason/FailedToUpdateLease https://api-int.ci-op-qsfvhmt7-8c867.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qsfvhmt7-8c867-j8qdc-worker-0-f5t76?timeout=10s - read tcp 10.93.152.41:50074->10.93.152.4:6443: read: connection reset by peer
2026-04-18T21:17:16Z node/ci-op-qsfvhmt7-8c867-j8qdc-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-qsfvhmt7-8c867.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-qsfvhmt7-8c867-j8qdc-master-0?timeout=10s - write tcp 10.93.152.39:43022->10.93.152.4:6443: write: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-csi (all) - 6 runs, 17% failed, 300% of failures match = 50% impact
#2045949786615451648 junit 24 hours ago
2026-04-19T20:32:23Z node/ip-10-0-57-65.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-t46vd4ty-f3272.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-57-65.ec2.internal?timeout=10s - read tcp 10.0.57.65:51134->10.0.124.60:6443: read: connection reset by peer
2026-04-19T20:32:54Z node/ip-10-0-64-249.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-t46vd4ty-f3272.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-249.ec2.internal?timeout=10s - read tcp 10.0.64.249:57382->10.0.6.124:6443: read: connection reset by peer
#2045811464819707904 junit 33 hours ago
2026-04-19T11:15:19Z node/ip-10-0-21-20.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-5cm03m2w-f3272.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-20.ec2.internal?timeout=10s - read tcp 10.0.21.20:57590->10.0.42.254:6443: read: connection reset by peer
#2045600180329254912 junit 47 hours ago
2026-04-18T21:18:49Z node/ip-10-0-21-109.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-f059ycdb-f3272.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-21-109.us-west-2.compute.internal?timeout=10s - read tcp 10.0.21.109:47934->10.0.84.246:6443: read: connection reset by peer
2026-04-18T21:18:54Z node/ip-10-0-117-181.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-f059ycdb-f3272.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-117-181.us-west-2.compute.internal?timeout=10s - read tcp 10.0.117.181:34192->10.0.48.46:6443: read: connection reset by peer
periodic-ci-openshift-cluster-authentication-operator-release-4.20-periodics-e2e-aws-external-oidc-configure (all) - 2 runs, 0% failed, 100% of runs match
#2045922641574891520 junit 24 hours ago
2026-04-19T19:01:40Z node/ip-10-0-49-80.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-80.us-west-1.compute.internal?timeout=10s - read tcp 10.0.49.80:58094->10.0.96.82:6443: read: connection reset by peer
2026-04-19T19:01:40Z node/ip-10-0-73-248.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-248.us-west-1.compute.internal?timeout=10s - read tcp 10.0.73.248:53408->10.0.4.108:6443: read: connection reset by peer
2026-04-19T19:07:25Z node/ip-10-0-119-118.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-118.us-west-1.compute.internal?timeout=10s - read tcp 10.0.119.118:60320->10.0.96.82:6443: read: connection reset by peer
2026-04-19T19:11:09Z node/ip-10-0-47-215.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-215.us-west-1.compute.internal?timeout=10s - read tcp 10.0.47.215:52910->10.0.4.108:6443: read: connection reset by peer
2026-04-19T19:11:19Z node/ip-10-0-47-215.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-215.us-west-1.compute.internal?timeout=10s - read tcp 10.0.47.215:52992->10.0.4.108:6443: read: connection reset by peer
2026-04-19T19:14:53Z node/ip-10-0-119-118.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-118.us-west-1.compute.internal?timeout=10s - read tcp 10.0.119.118:59670->10.0.96.82:6443: read: connection reset by peer
2026-04-19T19:14:54Z node/ip-10-0-47-215.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-215.us-west-1.compute.internal?timeout=10s - read tcp 10.0.47.215:56288->10.0.4.108:6443: read: connection reset by peer
2026-04-19T19:14:58Z node/ip-10-0-62-22.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-22.us-west-1.compute.internal?timeout=10s - read tcp 10.0.62.22:38832->10.0.4.108:6443: read: connection reset by peer
2026-04-19T19:25:17Z node/ip-10-0-47-215.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-215.us-west-1.compute.internal?timeout=10s - read tcp 10.0.47.215:42536->10.0.96.82:6443: read: connection reset by peer
2026-04-19T19:28:48Z node/ip-10-0-119-118.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-118.us-west-1.compute.internal?timeout=10s - read tcp 10.0.119.118:42418->10.0.96.82:6443: read: connection reset by peer
2026-04-19T19:28:52Z node/ip-10-0-47-215.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-215.us-west-1.compute.internal?timeout=10s - read tcp 10.0.47.215:41114->10.0.4.108:6443: read: connection reset by peer
2026-04-19T19:28:54Z node/ip-10-0-49-80.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-80.us-west-1.compute.internal?timeout=10s - read tcp 10.0.49.80:47558->10.0.96.82:6443: read: connection reset by peer
2026-04-19T19:28:55Z node/ip-10-0-62-22.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-22.us-west-1.compute.internal?timeout=10s - read tcp 10.0.62.22:52924->10.0.96.82:6443: read: connection reset by peer
2026-04-19T19:34:20Z node/ip-10-0-47-215.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-xykzq688-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-215.us-west-1.compute.internal?timeout=10s - read tcp 10.0.47.215:38346->10.0.96.82:6443: read: connection reset by peer

... 44 lines not shown

#2045560250504843264 junit 2 days ago
2026-04-18T18:46:41Z node/ip-10-0-46-3.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-3.ec2.internal?timeout=10s - read tcp 10.0.46.3:59896->10.0.67.49:6443: read: connection reset by peer
2026-04-18T18:59:35Z node/ip-10-0-99-157.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-99-157.ec2.internal?timeout=10s - read tcp 10.0.99.157:52118->10.0.10.175:6443: read: connection reset by peer
2026-04-18T18:59:38Z node/ip-10-0-46-3.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-3.ec2.internal?timeout=10s - read tcp 10.0.46.3:54796->10.0.67.49:6443: read: connection reset by peer
2026-04-18T18:59:53Z node/ip-10-0-55-161.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-55-161.ec2.internal?timeout=10s - read tcp 10.0.55.161:52642->10.0.10.175:6443: read: connection reset by peer
2026-04-18T19:03:56Z node/ip-10-0-71-225.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-225.ec2.internal?timeout=10s - read tcp 10.0.71.225:60914->10.0.10.175:6443: read: connection reset by peer
2026-04-18T19:07:07Z node/ip-10-0-46-3.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-3.ec2.internal?timeout=10s - read tcp 10.0.46.3:56616->10.0.67.49:6443: read: connection reset by peer
2026-04-18T19:07:10Z node/ip-10-0-40-50.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-40-50.ec2.internal?timeout=10s - read tcp 10.0.40.50:56170->10.0.67.49:6443: read: connection reset by peer
2026-04-18T19:07:11Z node/ip-10-0-71-225.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-225.ec2.internal?timeout=10s - read tcp 10.0.71.225:54296->10.0.67.49:6443: read: connection reset by peer
2026-04-18T19:07:14Z node/ip-10-0-50-61.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-61.ec2.internal?timeout=10s - read tcp 10.0.50.61:58278->10.0.67.49:6443: read: connection reset by peer
2026-04-18T19:07:45Z node/ip-10-0-50-61.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-61.ec2.internal?timeout=10s - read tcp 10.0.50.61:58034->10.0.10.175:6443: read: connection reset by peer
2026-04-18T19:12:40Z node/ip-10-0-99-157.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-99-157.ec2.internal?timeout=10s - read tcp 10.0.99.157:49114->10.0.67.49:6443: read: connection reset by peer
2026-04-18T19:12:52Z node/ip-10-0-50-61.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-61.ec2.internal?timeout=10s - read tcp 10.0.50.61:52842->10.0.10.175:6443: read: connection reset by peer
2026-04-18T19:13:07Z node/ip-10-0-71-225.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-225.ec2.internal?timeout=10s - read tcp 10.0.71.225:43226->10.0.10.175:6443: read: connection reset by peer
2026-04-18T19:13:33Z node/ip-10-0-50-61.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-3h4m0xht-e293b.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-50-61.ec2.internal?timeout=10s - read tcp 10.0.50.61:53038->10.0.10.175:6443: read: connection reset by peer

... 11 lines not shown

rehearse-77842-periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.20-e2e-aws-ovn-upgrade-paused (all) - 4 runs, 25% failed, 100% of failures match = 25% impact
#2045903008104976384 junit 24 hours ago
2026-04-19T17:40:22Z node/ip-10-0-73-159.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2s1sh88g-5f941.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-159.ec2.internal?timeout=10s - read tcp 10.0.73.159:43612->10.0.16.172:6443: read: connection reset by peer
2026-04-19T18:28:40Z node/ip-10-0-124-237.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2s1sh88g-5f941.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-124-237.ec2.internal?timeout=10s - context deadline exceeded
2026-04-19T18:54:17Z node/ip-10-0-124-237.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2s1sh88g-5f941.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-124-237.ec2.internal?timeout=10s - read tcp 10.0.124.237:34850->10.0.72.34:6443: read: connection reset by peer
2026-04-19T18:58:13Z node/ip-10-0-14-147.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2s1sh88g-5f941.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-147.ec2.internal?timeout=10s - read tcp 10.0.14.147:48148->10.0.16.172:6443: read: connection reset by peer
2026-04-19T19:02:16Z node/ip-10-0-124-237.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2s1sh88g-5f941.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-124-237.ec2.internal?timeout=10s - read tcp 10.0.124.237:47412->10.0.72.34:6443: read: connection reset by peer
2026-04-19T19:02:23Z node/ip-10-0-123-207.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2s1sh88g-5f941.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-207.ec2.internal?timeout=10s - read tcp 10.0.123.207:44734->10.0.72.34:6443: read: connection reset by peer
2026-04-19T19:56:39Z node/ip-10-0-14-147.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-2s1sh88g-5f941.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-14-147.ec2.internal?timeout=10s - read tcp 10.0.14.147:36148->10.0.72.34:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.21-e2e-vsphere-ovn-etcd-scaling (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045923397661102080 junit 24 hours ago
2026-04-19T19:14:47Z node/ci-op-piy0zzx7-1e19c-q9c7j-worker-0-lwdjn - reason/FailedToUpdateLease https://api-int.ci-op-piy0zzx7-1e19c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-piy0zzx7-1e19c-q9c7j-worker-0-lwdjn?timeout=10s - context deadline exceeded
2026-04-19T19:56:56Z node/ci-op-piy0zzx7-1e19c-q9c7j-worker-0-lwdjn - reason/FailedToUpdateLease https://api-int.ci-op-piy0zzx7-1e19c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-piy0zzx7-1e19c-q9c7j-worker-0-lwdjn?timeout=10s - read tcp 10.93.157.28:38002->10.93.157.4:6443: read: connection reset by peer
2026-04-19T19:56:56Z node/ci-op-piy0zzx7-1e19c-q9c7j-master-c6zc4-0 - reason/FailedToUpdateLease https://api-int.ci-op-piy0zzx7-1e19c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-piy0zzx7-1e19c-q9c7j-master-c6zc4-0?timeout=10s - read tcp 10.93.157.49:42916->10.93.157.4:6443: read: connection reset by peer
2026-04-19T19:56:56Z node/ci-op-piy0zzx7-1e19c-q9c7j-master-jr8cp-2 - reason/FailedToUpdateLease https://api-int.ci-op-piy0zzx7-1e19c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-piy0zzx7-1e19c-q9c7j-master-jr8cp-2?timeout=10s - read tcp 10.93.157.50:33218->10.93.157.4:6443: read: connection reset by peer
2026-04-19T20:09:17Z node/ci-op-piy0zzx7-1e19c-q9c7j-worker-0-twrbr - reason/FailedToUpdateLease https://api-int.ci-op-piy0zzx7-1e19c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-piy0zzx7-1e19c-q9c7j-worker-0-twrbr?timeout=10s - context deadline exceeded
#2045923397661102080 junit 24 hours ago
2026-04-19T20:45:56Z node/ci-op-piy0zzx7-1e19c-q9c7j-master-jr8cp-2 - reason/FailedToUpdateLease https://api-int.ci-op-piy0zzx7-1e19c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-piy0zzx7-1e19c-q9c7j-master-jr8cp-2?timeout=10s - context deadline exceeded
2026-04-19T20:51:55Z node/ci-op-piy0zzx7-1e19c-q9c7j-master-jr8cp-2 - reason/FailedToUpdateLease https://api-int.ci-op-piy0zzx7-1e19c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-piy0zzx7-1e19c-q9c7j-master-jr8cp-2?timeout=10s - read tcp 10.93.157.50:60664->10.93.157.4:6443: read: connection reset by peer
2026-04-19T20:51:55Z node/ci-op-piy0zzx7-1e19c-q9c7j-master-5sbgb-565wc-1 - reason/FailedToUpdateLease https://api-int.ci-op-piy0zzx7-1e19c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-piy0zzx7-1e19c-q9c7j-master-5sbgb-565wc-1?timeout=10s - write tcp 10.93.157.71:36628->10.93.157.4:6443: write: connection reset by peer
2026-04-19T20:51:56Z node/ci-op-piy0zzx7-1e19c-q9c7j-worker-0-twrbr - reason/FailedToUpdateLease https://api-int.ci-op-piy0zzx7-1e19c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-piy0zzx7-1e19c-q9c7j-worker-0-twrbr?timeout=10s - read tcp 10.93.157.29:53082->10.93.157.4:6443: read: connection reset by peer
2026-04-19T20:52:01Z node/ci-op-piy0zzx7-1e19c-q9c7j-worker-0-lwdjn - reason/FailedToUpdateLease https://api-int.ci-op-piy0zzx7-1e19c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-piy0zzx7-1e19c-q9c7j-worker-0-lwdjn?timeout=10s - read tcp 10.93.157.28:58844->10.93.157.4:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-upgrade (all) - 84 runs, 20% failed, 12% of failures match = 2% impact
#2045906022245076992 junit 24 hours ago
# [sig-node] Pod InPlace Resize Container burstable pods - 1 container with all requests & limits set and resize policy no restart resizing mem requests
fail [k8s.io/kubernetes/test/e2e/framework/pod/pod_client.go:236]: Failed to delete pod "resize-test-c7c9d": Delete "https://api.ci-op-hcmx1rdg-c8ea2.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/e2e-pod-resize-tests-6339/pods/resize-test-c7c9d": read tcp 10.129.180.205:41860->34.49.134.226:6443: read: connection reset by peer
#2045634074583764992 junit 42 hours ago
stderr:
E0419 01:16:22.630875   20689 request.go:1196] "Unexpected error when reading response body" err="read tcp 10.129.128.10:58712->34.8.25.217:6443: read: connection reset by peer"
error: unexpected error when reading response body. Please retry. Original error: read tcp 10.129.128.10:58712->34.8.25.217:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.19-aws-ovn-network-flow-matrix (all) - 1 runs, 0% failed, 100% of runs match
#2045910410418196480 junit 24 hours ago
2026-04-19T17:49:05Z node/ip-10-0-13-228.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-228.us-west-1.compute.internal?timeout=10s - read tcp 10.0.13.228:43784->10.0.49.98:6443: read: connection reset by peer
2026-04-19T17:49:16Z node/ip-10-0-68-196.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-68-196.us-west-1.compute.internal?timeout=10s - read tcp 10.0.68.196:35624->10.0.74.242:6443: read: connection reset by peer
2026-04-19T19:24:11Z node/ip-10-0-90-205.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-205.us-west-1.compute.internal?timeout=10s - read tcp 10.0.90.205:37284->10.0.74.242:6443: read: connection reset by peer
2026-04-19T19:24:15Z node/ip-10-0-102-101.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-101.us-west-1.compute.internal?timeout=10s - read tcp 10.0.102.101:53022->10.0.49.98:6443: read: connection reset by peer
2026-04-19T19:28:10Z node/ip-10-0-82-185.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-82-185.us-west-1.compute.internal?timeout=10s - read tcp 10.0.82.185:36876->10.0.49.98:6443: read: connection reset by peer
2026-04-19T19:32:13Z node/ip-10-0-102-101.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-102-101.us-west-1.compute.internal?timeout=10s - read tcp 10.0.102.101:59912->10.0.74.242:6443: read: connection reset by peer
2026-04-19T19:36:47Z node/ip-10-0-60-61.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-61.us-west-1.compute.internal?timeout=10s - read tcp 10.0.60.61:39698->10.0.49.98:6443: read: connection reset by peer
2026-04-19T19:36:55Z node/ip-10-0-90-205.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-205.us-west-1.compute.internal?timeout=10s - read tcp 10.0.90.205:48812->10.0.49.98:6443: read: connection reset by peer
2026-04-19T19:36:56Z node/ip-10-0-68-196.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-68-196.us-west-1.compute.internal?timeout=10s - read tcp 10.0.68.196:44708->10.0.49.98:6443: read: connection reset by peer
2026-04-19T19:36:57Z node/ip-10-0-60-61.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-61.us-west-1.compute.internal?timeout=10s - read tcp 10.0.60.61:39892->10.0.49.98:6443: read: connection reset by peer
2026-04-19T19:37:01Z node/ip-10-0-82-185.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-82-185.us-west-1.compute.internal?timeout=10s - read tcp 10.0.82.185:55022->10.0.74.242:6443: read: connection reset by peer
2026-04-19T19:40:51Z node/ip-10-0-90-205.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-205.us-west-1.compute.internal?timeout=10s - read tcp 10.0.90.205:33544->10.0.49.98:6443: read: connection reset by peer
2026-04-19T19:40:53Z node/ip-10-0-13-228.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-228.us-west-1.compute.internal?timeout=10s - read tcp 10.0.13.228:55140->10.0.49.98:6443: read: connection reset by peer
2026-04-19T20:07:24Z node/ip-10-0-13-228.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-6ljhdlis-25292.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-13-228.us-west-1.compute.internal?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-ci-4.12-e2e-aws-sdn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2045924655730003968 junit 25 hours ago
Apr 19 18:45:46.393 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update servicemonitor "openshift-cluster-version/cluster-version-operator" (10 of 832)
Apr 19 18:52:29.798 E clusteroperator/network condition/Degraded status/True reason/ApplyOperatorConfig changed: Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io: Patch "https://api-int.ci-op-q6xxnshw-02996.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/ippools.whereabouts.cni.cncf.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.177.136:35584->10.0.235.136:6443: read: connection reset by peer
Apr 19 18:52:29.798 - 35s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io: Patch "https://api-int.ci-op-q6xxnshw-02996.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/ippools.whereabouts.cni.cncf.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.177.136:35584->10.0.235.136:6443: read: connection reset by peer
Apr 19 18:58:44.872 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-guard-ip-10-0-177-136.ec2.internal node/ip-10-0-177-136.ec2.internal uid/9a152c06-b6b3-4bec-9fd3-bef605b60a70 container/guard reason/ContainerExit code/137 cause/ContainerStatusUnknown The container could not be located when the pod was deleted.  The container used to be Running
#2045924655730003968 junit 25 hours ago
Apr 19 18:52:29.798 - 35s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io: failed to apply / update (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io: Patch "https://api-int.ci-op-q6xxnshw-02996.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/ippools.whereabouts.cni.cncf.io?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.177.136:35584->10.0.235.136:6443: read: connection reset by peer
periodic-ci-openshift-cluster-authentication-operator-release-4.20-periodics-e2e-aws-external-oidc-rollback (all) - 1 runs, 0% failed, 100% of runs match
#2045922642170482688 junit 25 hours ago
2026-04-19T18:52:16Z node/ip-10-0-86-139.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-86-139.us-west-1.compute.internal?timeout=10s - read tcp 10.0.86.139:40312->10.0.23.8:6443: read: connection reset by peer
2026-04-19T18:59:57Z node/ip-10-0-86-139.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-86-139.us-west-1.compute.internal?timeout=10s - read tcp 10.0.86.139:40386->10.0.23.8:6443: read: connection reset by peer
2026-04-19T19:00:02Z node/ip-10-0-80-82.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-82.us-west-1.compute.internal?timeout=10s - read tcp 10.0.80.82:49706->10.0.113.97:6443: read: connection reset by peer
2026-04-19T19:09:14Z node/ip-10-0-80-82.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-82.us-west-1.compute.internal?timeout=10s - read tcp 10.0.80.82:57326->10.0.23.8:6443: read: connection reset by peer
2026-04-19T19:09:14Z node/ip-10-0-103-247.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-247.us-west-1.compute.internal?timeout=10s - read tcp 10.0.103.247:34796->10.0.23.8:6443: read: connection reset by peer
2026-04-19T19:09:26Z node/ip-10-0-25-154.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-154.us-west-1.compute.internal?timeout=10s - read tcp 10.0.25.154:41336->10.0.113.97:6443: read: connection reset by peer
2026-04-19T19:13:09Z node/ip-10-0-103-247.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-247.us-west-1.compute.internal?timeout=10s - read tcp 10.0.103.247:37442->10.0.113.97:6443: read: connection reset by peer
2026-04-19T19:13:10Z node/ip-10-0-79-52.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-52.us-west-1.compute.internal?timeout=10s - read tcp 10.0.79.52:35604->10.0.113.97:6443: read: connection reset by peer
2026-04-19T19:13:20Z node/ip-10-0-25-154.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-154.us-west-1.compute.internal?timeout=10s - read tcp 10.0.25.154:43428->10.0.113.97:6443: read: connection reset by peer
2026-04-19T19:18:56Z node/ip-10-0-103-247.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-103-247.us-west-1.compute.internal?timeout=10s - read tcp 10.0.103.247:53172->10.0.23.8:6443: read: connection reset by peer
2026-04-19T19:19:07Z node/ip-10-0-79-52.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-79-52.us-west-1.compute.internal?timeout=10s - read tcp 10.0.79.52:53842->10.0.113.97:6443: read: connection reset by peer
2026-04-19T19:22:52Z node/ip-10-0-25-154.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-25-154.us-west-1.compute.internal?timeout=10s - read tcp 10.0.25.154:35682->10.0.23.8:6443: read: connection reset by peer
2026-04-19T19:22:56Z node/ip-10-0-86-139.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-86-139.us-west-1.compute.internal?timeout=10s - read tcp 10.0.86.139:52332->10.0.23.8:6443: read: connection reset by peer
2026-04-19T19:23:01Z node/ip-10-0-80-82.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-4nsi9cb1-c32f2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-80-82.us-west-1.compute.internal?timeout=10s - read tcp 10.0.80.82:51640->10.0.23.8:6443: read: connection reset by peer

... 32 lines not shown

pull-ci-openshift-installer-main-e2e-openstack-proxy (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045831859870371840junit25 hours ago
2026-04-19T13:31:46Z node/xjndmxdv-45d29-76n48-master-1 - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-master-1?timeout=10s - context deadline exceeded
2026-04-19T14:01:59Z node/xjndmxdv-45d29-76n48-master-2 - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-master-2?timeout=10s - read tcp 172.16.0.18:39042->172.16.0.5:6443: read: connection reset by peer
2026-04-19T14:02:09Z node/xjndmxdv-45d29-76n48-master-2 - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045831859870371840junit25 hours ago
2026-04-19T14:30:46Z node/xjndmxdv-45d29-76n48-worker-0-wd59k - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-worker-0-wd59k?timeout=10s - context deadline exceeded
2026-04-19T14:30:55Z node/xjndmxdv-45d29-76n48-worker-0-wd59k - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-worker-0-wd59k?timeout=10s - read tcp 172.16.0.14:46970->172.16.0.5:6443: read: connection reset by peer
2026-04-19T14:31:01Z node/xjndmxdv-45d29-76n48-worker-0-wd59k - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-worker-0-wd59k?timeout=10s - context deadline exceeded
2026-04-19T14:31:44Z node/xjndmxdv-45d29-76n48-worker-0-wd59k - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-worker-0-wd59k?timeout=10s - read tcp 172.16.0.14:33500->172.16.0.5:6443: read: connection reset by peer
2026-04-19T14:31:44Z node/xjndmxdv-45d29-76n48-master-2 - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-master-2?timeout=10s - read tcp 172.16.0.18:37968->172.16.0.5:6443: read: connection reset by peer
2026-04-19T14:31:54Z node/xjndmxdv-45d29-76n48-worker-0-wd59k - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-worker-0-wd59k?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045831859870371840junit25 hours ago
2026-04-19T14:32:18Z node/xjndmxdv-45d29-76n48-master-1 - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T14:32:23Z node/xjndmxdv-45d29-76n48-worker-0-wd59k - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-worker-0-wd59k?timeout=10s - read tcp 172.16.0.14:48440->172.16.0.5:6443: read: connection reset by peer
2026-04-19T14:32:25Z node/xjndmxdv-45d29-76n48-master-2 - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045831859870371840junit25 hours ago
2026-04-19T14:34:52Z node/xjndmxdv-45d29-76n48-worker-0-wd59k - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-worker-0-wd59k?timeout=10s - http2: client connection force closed via ClientConn.Close
2026-04-19T14:34:54Z node/xjndmxdv-45d29-76n48-worker-0-wd59k - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-worker-0-wd59k?timeout=10s - read tcp 172.16.0.14:39134->172.16.0.5:6443: read: connection reset by peer
2026-04-19T14:35:08Z node/xjndmxdv-45d29-76n48-master-2 - reason/FailedToUpdateLease https://api-int.xjndmxdv-45d29.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xjndmxdv-45d29-76n48-master-2?timeout=10s - write tcp 172.16.0.18:38844->172.16.0.5:6443: write: connection reset by peer
rehearse-77842-periodic-ci-openshift-release-main-nightly-4.20-upgrade-from-stable-4.18-e2e-aws-ovn-upgrade-paused (all) - 1 runs, 0% failed, 100% of runs match
#2045903007882678272junit25 hours ago
2026-04-19T17:32:21Z node/ip-10-0-71-62.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rvr6mt2-eb755.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-62.us-west-1.compute.internal?timeout=10s - read tcp 10.0.71.62:60676->10.0.122.157:6443: read: connection reset by peer
2026-04-19T17:32:29Z node/ip-10-0-7-61.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rvr6mt2-eb755.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-61.us-west-1.compute.internal?timeout=10s - read tcp 10.0.7.61:58722->10.0.122.157:6443: read: connection reset by peer
2026-04-19T17:32:37Z node/ip-10-0-6-119.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rvr6mt2-eb755.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-119.us-west-1.compute.internal?timeout=10s - read tcp 10.0.6.119:43874->10.0.48.117:6443: read: connection reset by peer
2026-04-19T17:36:22Z node/ip-10-0-19-135.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rvr6mt2-eb755.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-135.us-west-1.compute.internal?timeout=10s - read tcp 10.0.19.135:38776->10.0.48.117:6443: read: connection reset by peer
2026-04-19T17:36:23Z node/ip-10-0-7-61.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rvr6mt2-eb755.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-61.us-west-1.compute.internal?timeout=10s - read tcp 10.0.7.61:44562->10.0.122.157:6443: read: connection reset by peer
2026-04-19T17:36:27Z node/ip-10-0-115-132.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rvr6mt2-eb755.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-115-132.us-west-1.compute.internal?timeout=10s - read tcp 10.0.115.132:45136->10.0.122.157:6443: read: connection reset by peer
2026-04-19T18:16:32Z node/ip-10-0-6-119.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rvr6mt2-eb755.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-119.us-west-1.compute.internal?timeout=10s - read tcp 10.0.6.119:46780->10.0.48.117:6443: read: connection reset by peer
2026-04-19T18:50:57Z node/ip-10-0-19-135.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rvr6mt2-eb755.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-135.us-west-1.compute.internal?timeout=10s - read tcp 10.0.19.135:46810->10.0.48.117:6443: read: connection reset by peer
2026-04-19T18:55:00Z node/ip-10-0-6-119.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rvr6mt2-eb755.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-119.us-west-1.compute.internal?timeout=10s - read tcp 10.0.6.119:45338->10.0.48.117:6443: read: connection reset by peer
2026-04-19T18:59:16Z node/ip-10-0-19-135.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rvr6mt2-eb755.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-19-135.us-west-1.compute.internal?timeout=10s - read tcp 10.0.19.135:51222->10.0.122.157:6443: read: connection reset by peer
2026-04-19T19:47:14Z node/ip-10-0-71-62.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1rvr6mt2-eb755.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-62.us-west-1.compute.internal?timeout=10s - context deadline exceeded
periodic-ci-shiftstack-ci-release-4.21-e2e-openstack-ovn-etcd-scaling (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045926671512506368junit25 hours ago
2026-04-19T19:10:41Z node/lgi5pw1m-7d157-gwfwx-worker-0-rqjc2 - reason/FailedToUpdateLease https://api-int.lgi5pw1m-7d157.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/lgi5pw1m-7d157-gwfwx-worker-0-rqjc2?timeout=10s - read tcp 10.0.2.115:55770->10.0.0.5:6443: read: connection reset by peer
2026-04-19T19:10:47Z node/lgi5pw1m-7d157-gwfwx-master-snmnh-0 - reason/FailedToUpdateLease https://api-int.lgi5pw1m-7d157.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/lgi5pw1m-7d157-gwfwx-master-snmnh-0?timeout=10s - read tcp 10.0.2.6:48758->10.0.0.5:6443: read: connection reset by peer
2026-04-19T19:10:48Z node/lgi5pw1m-7d157-gwfwx-worker-0-qgcrr - reason/FailedToUpdateLease https://api-int.lgi5pw1m-7d157.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/lgi5pw1m-7d157-gwfwx-worker-0-qgcrr?timeout=10s - read tcp 10.0.1.212:50734->10.0.0.5:6443: read: connection reset by peer
2026-04-19T19:15:54Z node/lgi5pw1m-7d157-gwfwx-master-snmnh-0 - reason/FailedToUpdateLease https://api-int.lgi5pw1m-7d157.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/lgi5pw1m-7d157-gwfwx-master-snmnh-0?timeout=10s - context deadline exceeded
periodic-ci-openshift-cluster-authentication-operator-release-4.20-periodics-e2e-aws-external-oidc-uid-extra (all) - 1 runs, 0% failed, 100% of runs match
#2045922642216620032junit25 hours ago
2026-04-19T18:45:37Z node/ip-10-0-94-110.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-94-110.us-west-2.compute.internal?timeout=10s - read tcp 10.0.94.110:38460->10.0.119.218:6443: read: connection reset by peer
2026-04-19T18:50:10Z node/ip-10-0-73-244.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-244.us-west-2.compute.internal?timeout=10s - read tcp 10.0.73.244:41512->10.0.119.218:6443: read: connection reset by peer
2026-04-19T19:02:31Z node/ip-10-0-83-226.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-226.us-west-2.compute.internal?timeout=10s - read tcp 10.0.83.226:49328->10.0.119.218:6443: read: connection reset by peer
2026-04-19T19:02:58Z node/ip-10-0-82-214.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-82-214.us-west-2.compute.internal?timeout=10s - read tcp 10.0.82.214:55516->10.0.119.218:6443: read: connection reset by peer
2026-04-19T19:06:44Z node/ip-10-0-5-222.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-222.us-west-2.compute.internal?timeout=10s - read tcp 10.0.5.222:50840->10.0.119.218:6443: read: connection reset by peer
2026-04-19T19:10:52Z node/ip-10-0-83-226.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-226.us-west-2.compute.internal?timeout=10s - read tcp 10.0.83.226:44224->10.0.58.41:6443: read: connection reset by peer
2026-04-19T19:14:21Z node/ip-10-0-82-214.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-82-214.us-west-2.compute.internal?timeout=10s - read tcp 10.0.82.214:41224->10.0.58.41:6443: read: connection reset by peer
2026-04-19T19:25:51Z node/ip-10-0-24-193.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-193.us-west-2.compute.internal?timeout=10s - read tcp 10.0.24.193:40900->10.0.119.218:6443: read: connection reset by peer
2026-04-19T19:30:16Z node/ip-10-0-24-193.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-24-193.us-west-2.compute.internal?timeout=10s - read tcp 10.0.24.193:58582->10.0.119.218:6443: read: connection reset by peer
2026-04-19T19:34:55Z node/ip-10-0-73-244.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-244.us-west-2.compute.internal?timeout=10s - read tcp 10.0.73.244:41120->10.0.58.41:6443: read: connection reset by peer
2026-04-19T19:38:43Z node/ip-10-0-5-222.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-222.us-west-2.compute.internal?timeout=10s - read tcp 10.0.5.222:41432->10.0.58.41:6443: read: connection reset by peer
2026-04-19T19:38:50Z node/ip-10-0-73-244.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-244.us-west-2.compute.internal?timeout=10s - read tcp 10.0.73.244:55284->10.0.58.41:6443: read: connection reset by peer
2026-04-19T19:38:57Z node/ip-10-0-83-226.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-226.us-west-2.compute.internal?timeout=10s - read tcp 10.0.83.226:53488->10.0.119.218:6443: read: connection reset by peer
2026-04-19T19:42:33Z node/ip-10-0-73-244.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc0ii9bq-c1ae6.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-73-244.us-west-2.compute.internal?timeout=10s - read tcp 10.0.73.244:55846->10.0.119.218:6443: read: connection reset by peer

... 16 lines not shown

periodic-ci-openshift-cluster-network-operator-release-4.19-e2e-aws-ovn-clusternetwork-cidr-expansion (all) - 1 runs, 0% failed, 100% of runs match
#2045911316966674432junit26 hours ago
2026-04-19T18:05:52Z node/ip-10-0-2-164.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-lx7vw1yn-a0384.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-2-164.ec2.internal?timeout=10s - read tcp 10.0.2.164:48782->10.0.76.49:6443: read: connection reset by peer
2026-04-19T18:05:59Z node/ip-10-0-27-43.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-lx7vw1yn-a0384.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-43.ec2.internal?timeout=10s - read tcp 10.0.27.43:57014->10.0.76.49:6443: read: connection reset by peer
2026-04-19T18:17:57Z node/ip-10-0-98-171.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-lx7vw1yn-a0384.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-98-171.ec2.internal?timeout=10s - read tcp 10.0.98.171:57654->10.0.60.227:6443: read: connection reset by peer
2026-04-19T18:21:08Z node/ip-10-0-62-7.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-lx7vw1yn-a0384.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-7.ec2.internal?timeout=10s - read tcp 10.0.62.7:55506->10.0.76.49:6443: read: connection reset by peer
2026-04-19T18:21:10Z node/ip-10-0-98-171.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-lx7vw1yn-a0384.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-98-171.ec2.internal?timeout=10s - read tcp 10.0.98.171:34266->10.0.76.49:6443: read: connection reset by peer
2026-04-19T18:21:12Z node/ip-10-0-12-236.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-lx7vw1yn-a0384.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-12-236.ec2.internal?timeout=10s - read tcp 10.0.12.236:43402->10.0.60.227:6443: read: connection reset by peer
2026-04-19T18:25:06Z node/ip-10-0-2-164.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-lx7vw1yn-a0384.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-2-164.ec2.internal?timeout=10s - read tcp 10.0.2.164:57914->10.0.60.227:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-5.0-e2e-gcp-ovn-upgrade (all) - 88 runs, 38% failed, 3% of failures match = 1% impact
#2045882255565393920junit26 hours ago
# [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
fail [k8s.io/kubernetes/test/e2e/framework/pod/exec_util.go:133]: failed to get pod test-container-pod: unexpected error when reading response body. Please retry. Original error: read tcp 10.129.147.171:40890->34.54.149.91:6443: read: connection reset by peer
periodic-ci-shiftstack-ci-release-4.22-e2e-openstack-dualstack (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045895715418279936junit26 hours ago
2026-04-19T17:21:21Z node/3sspkr76-bbe08-cpxp8-worker-0-9q9r7 - reason/FailedToUpdateLease https://api-int.3sspkr76-bbe08.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/3sspkr76-bbe08-cpxp8-worker-0-9q9r7?timeout=10s - read tcp 192.168.25.107:48212->192.168.25.144:6443: read: connection reset by peer
2026-04-19T17:21:22Z node/3sspkr76-bbe08-cpxp8-worker-0-9q9r7 - reason/FailedToUpdateLease https://api-int.3sspkr76-bbe08.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/3sspkr76-bbe08-cpxp8-worker-0-9q9r7?timeout=10s - read tcp 192.168.25.107:48426->192.168.25.144:6443: read: connection reset by peer
2026-04-19T17:21:23Z node/3sspkr76-bbe08-cpxp8-master-1 - reason/FailedToUpdateLease https://api-int.3sspkr76-bbe08.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/3sspkr76-bbe08-cpxp8-master-1?timeout=10s - context deadline exceeded
#2045895715418279936junit26 hours ago
2026-04-19T17:21:26Z node/3sspkr76-bbe08-cpxp8-master-2 - reason/FailedToUpdateLease https://api-int.3sspkr76-bbe08.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/3sspkr76-bbe08-cpxp8-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T17:21:26Z node/3sspkr76-bbe08-cpxp8-master-2 - reason/FailedToUpdateLease https://api-int.3sspkr76-bbe08.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/3sspkr76-bbe08-cpxp8-master-2?timeout=10s - read tcp 192.168.25.155:48936->192.168.25.144:6443: read: connection reset by peer
2026-04-19T17:21:27Z node/3sspkr76-bbe08-cpxp8-worker-0-9q9r7 - reason/FailedToUpdateLease https://api-int.3sspkr76-bbe08.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/3sspkr76-bbe08-cpxp8-worker-0-9q9r7?timeout=10s - read tcp 192.168.25.107:44666->192.168.25.144:6443: read: connection reset by peer
2026-04-19T17:21:29Z node/3sspkr76-bbe08-cpxp8-worker-0-hq4vq - reason/FailedToUpdateLease https://api-int.3sspkr76-bbe08.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/3sspkr76-bbe08-cpxp8-worker-0-hq4vq?timeout=10s - read tcp 192.168.25.231:45372->192.168.25.144:6443: read: connection reset by peer
2026-04-19T17:21:33Z node/3sspkr76-bbe08-cpxp8-master-1 - reason/FailedToUpdateLease https://api-int.3sspkr76-bbe08.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/3sspkr76-bbe08-cpxp8-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-cluster-storage-operator-release-4.22-periodics-periodic-e2e-vsphere-ovn-upgrade-check-dev-symlinks (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#2045906283650879488junit27 hours ago
2026-04-19T18:46:40Z node/ci-op-v79w93d6-1788c-z5hvj-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-v79w93d6-1788c.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-v79w93d6-1788c-z5hvj-master-2?timeout=10s - read tcp 10.221.127.55:38448->10.221.127.8:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.18-upgrade-from-stable-4.17-e2e-vsphere-cgroupsv1-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2045898482945888256junit27 hours ago
2026-04-19T18:08:24Z node/ci-op-y573q0vw-82bd7-5jrks-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-y573q0vw-82bd7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-y573q0vw-82bd7-5jrks-master-0?timeout=10s - http2: client connection lost
2026-04-19T18:13:57Z node/ci-op-y573q0vw-82bd7-5jrks-worker-0-wrhln - reason/FailedToUpdateLease https://api-int.ci-op-y573q0vw-82bd7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-y573q0vw-82bd7-5jrks-worker-0-wrhln?timeout=10s - read tcp 10.38.84.111:54712->10.38.84.10:6443: read: connection reset by peer
2026-04-19T18:13:57Z node/ci-op-y573q0vw-82bd7-5jrks-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-y573q0vw-82bd7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-y573q0vw-82bd7-5jrks-master-1?timeout=10s - read tcp 10.38.84.106:44010->10.38.84.10:6443: read: connection reset by peer
2026-04-19T18:14:01Z node/ci-op-y573q0vw-82bd7-5jrks-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-y573q0vw-82bd7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-y573q0vw-82bd7-5jrks-master-0?timeout=10s - write tcp 10.38.84.108:45446->10.38.84.10:6443: write: connection reset by peer
2026-04-19T18:14:04Z node/ci-op-y573q0vw-82bd7-5jrks-worker-0-d8s97 - reason/FailedToUpdateLease https://api-int.ci-op-y573q0vw-82bd7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-y573q0vw-82bd7-5jrks-worker-0-d8s97?timeout=10s - read tcp 10.38.84.110:46110->10.38.84.10:6443: read: connection reset by peer
2026-04-19T18:14:13Z node/ci-op-y573q0vw-82bd7-5jrks-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-y573q0vw-82bd7.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-y573q0vw-82bd7-5jrks-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.19-e2e-metal-ipi-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2045872310887387136junit27 hours ago
2026-04-19T16:59:26Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-19T16:59:39Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:55286->192.168.111.5:6443: read: connection reset by peer
2026-04-19T17:05:46Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - dial tcp 192.168.111.5:6443: connect: connection refused
#2045872310887387136junit27 hours ago
2026-04-19T17:05:58Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - write tcp 192.168.111.22:41438->192.168.111.5:6443: write: broken pipe
2026-04-19T17:05:59Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:51162->192.168.111.5:6443: read: connection reset by peer
2026-04-19T17:05:59Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:50384->192.168.111.5:6443: read: connection reset by peer
2026-04-19T17:06:03Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:51818->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-5.0-e2e-aws-ovn-upgrade-fips (all) - 80 runs, 23% failed, 6% of failures match = 1% impact
#2045857761727614976junit28 hours ago
    <*fmt.wrapError | 0xc001a911e0>:
    error reading from error stream: next reader: read tcp 172.24.30.11:59890->50.18.112.107:6443: read: connection reset by peer
    {
        msg: "error reading from error stream: next reader: read tcp 172.24.30.11:59890->50.18.112.107:6443: read: connection reset by peer",
        err: <*fmt.wrapError | 0xc000df67c0>{
            msg: "next reader: read tcp 172.24.30.11:59890->50.18.112.107:6443: read: connection reset by peer",
            err: <*net.OpError | 0xc000def360>{
periodic-ci-openshift-multiarch-main-nightly-4.19-ocp-e2e-aws-ovn-multi-x-x-to-a-x (all) - 1 runs, 0% failed, 100% of runs match
#2045869542021795840junit28 hours ago
2026-04-19T15:19:13Z node/ip-10-0-39-36.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jiif0j8k-60439.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-36.ec2.internal?timeout=10s - read tcp 10.0.39.36:47798->10.0.60.81:6443: read: connection reset by peer
2026-04-19T15:22:40Z node/ip-10-0-5-145.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jiif0j8k-60439.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-145.ec2.internal?timeout=10s - read tcp 10.0.5.145:35440->10.0.60.81:6443: read: connection reset by peer
2026-04-19T15:28:09Z node/ip-10-0-83-35.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jiif0j8k-60439.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-35.ec2.internal?timeout=10s - read tcp 10.0.83.35:51862->10.0.89.106:6443: read: connection reset by peer
2026-04-19T15:40:12Z node/ip-10-0-5-145.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jiif0j8k-60439.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-145.ec2.internal?timeout=10s - read tcp 10.0.5.145:48260->10.0.60.81:6443: read: connection reset by peer
2026-04-19T15:47:59Z node/ip-10-0-39-36.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jiif0j8k-60439.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-39-36.ec2.internal?timeout=10s - read tcp 10.0.39.36:57012->10.0.60.81:6443: read: connection reset by peer
2026-04-19T15:48:02Z node/ip-10-0-15-84.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jiif0j8k-60439.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-84.ec2.internal?timeout=10s - read tcp 10.0.15.84:41492->10.0.60.81:6443: read: connection reset by peer
2026-04-19T15:52:06Z node/ip-10-0-15-84.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jiif0j8k-60439.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-15-84.ec2.internal?timeout=10s - read tcp 10.0.15.84:45518->10.0.60.81:6443: read: connection reset by peer
2026-04-19T15:55:47Z node/ip-10-0-49-219.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jiif0j8k-60439.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-219.ec2.internal?timeout=10s - read tcp 10.0.49.219:43570->10.0.89.106:6443: read: connection reset by peer
2026-04-19T15:56:02Z node/ip-10-0-83-35.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jiif0j8k-60439.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-35.ec2.internal?timeout=10s - read tcp 10.0.83.35:50330->10.0.60.81:6443: read: connection reset by peer
2026-04-19T15:56:12Z node/ip-10-0-83-35.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-jiif0j8k-60439.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-83-35.ec2.internal?timeout=10s - read tcp 10.0.83.35:50378->10.0.60.81:6443: read: connection reset by peer
periodic-ci-shiftstack-ci-release-4.22-e2e-openstack-proxy (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045855701082836992junit28 hours ago
2026-04-19T14:23:23Z node/zdipgj23-5e9ca-xzfs8-master-2 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-master-2?timeout=10s - context deadline exceeded
2026-04-19T14:28:49Z node/zdipgj23-5e9ca-xzfs8-master-0 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-master-0?timeout=10s - read tcp 172.16.0.14:41824->172.16.0.5:6443: read: connection reset by peer
2026-04-19T14:28:50Z node/zdipgj23-5e9ca-xzfs8-master-2 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-master-2?timeout=10s - write tcp 172.16.0.17:46052->172.16.0.5:6443: write: connection reset by peer
#2045855701082836992junit28 hours ago
2026-04-19T14:29:37Z node/zdipgj23-5e9ca-xzfs8-master-2 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-master-2?timeout=10s - http2: client connection lost
2026-04-19T14:31:34Z node/zdipgj23-5e9ca-xzfs8-master-0 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-master-0?timeout=10s - read tcp 172.16.0.14:49794->172.16.0.5:6443: read: connection reset by peer
2026-04-19T15:15:54Z node/zdipgj23-5e9ca-xzfs8-master-0 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-master-0?timeout=10s - context deadline exceeded
#2045855701082836992junit28 hours ago
2026-04-19T15:16:02Z node/zdipgj23-5e9ca-xzfs8-master-2 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T15:16:31Z node/zdipgj23-5e9ca-xzfs8-master-0 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-master-0?timeout=10s - read tcp 172.16.0.14:37746->172.16.0.5:6443: read: connection reset by peer
2026-04-19T15:16:41Z node/zdipgj23-5e9ca-xzfs8-master-1 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-master-1?timeout=10s - context deadline exceeded
2026-04-19T15:16:48Z node/zdipgj23-5e9ca-xzfs8-worker-0-j4ck7 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-worker-0-j4ck7?timeout=10s - read tcp 172.16.0.15:39342->172.16.0.5:6443: read: connection reset by peer
2026-04-19T15:16:48Z node/zdipgj23-5e9ca-xzfs8-worker-0-j4ck7 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-worker-0-j4ck7?timeout=10s - read tcp 172.16.0.15:39380->172.16.0.5:6443: read: connection reset by peer
2026-04-19T15:16:53Z node/zdipgj23-5e9ca-xzfs8-master-1 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-master-1?timeout=10s - context deadline exceeded
#2045855701082836992junit28 hours ago
2026-04-19T15:17:08Z node/zdipgj23-5e9ca-xzfs8-master-1 - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-master-1?timeout=10s - http2: client connection lost
2026-04-19T15:17:09Z node/zdipgj23-5e9ca-xzfs8-worker-0-mh47v - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-worker-0-mh47v?timeout=10s - read tcp 172.16.0.16:44658->172.16.0.5:6443: read: connection reset by peer
2026-04-19T15:18:45Z node/zdipgj23-5e9ca-xzfs8-worker-0-st49f - reason/FailedToUpdateLease https://api-int.zdipgj23-5e9ca.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/zdipgj23-5e9ca-xzfs8-worker-0-st49f?timeout=10s - http2: client connection force closed via ClientConn.Close
periodic-ci-openshift-multiarch-main-nightly-4.22-upgrade-from-stable-4.21-ocp-e2e-upgrade-gcp-ovn-multi-a-a (all) - 7 runs, 57% failed, 25% of failures match = 14% impact
#2045844290415890432junit28 hours ago
# [sig-node] Pod InPlace Resize Container burstable pods - 1 container with all requests & limits set and resize policy cpu & mem restart resizing requests & limits in opposite directions
fail [k8s.io/kubernetes/test/e2e/common/node/pod_resize.go:850]: failed to patch pod for resize: Patch "https://api.ci-op-h1b8gg39-73dc4.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/e2e-pod-resize-tests-4836/pods/resize-test-wnq7b/resize": read tcp 10.130.43.240:33896->34.95.120.24:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.21-e2e-vsphere-ovn-csi (all) - 6 runs, 33% failed, 50% of failures match = 17% impact
#2045873655971319808junit29 hours ago
2026-04-19T16:34:03Z node/ci-op-tp4l4p9q-2e847-g56d6-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-tp4l4p9q-2e847.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-tp4l4p9q-2e847-g56d6-master-2?timeout=10s - read tcp 10.93.157.100:42504->10.93.157.4:6443: read: connection reset by peer
2026-04-19T16:34:13Z node/ci-op-tp4l4p9q-2e847-g56d6-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-tp4l4p9q-2e847.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-tp4l4p9q-2e847-g56d6-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045873655971319808junit29 hours ago
2026-04-19T16:34:43Z node/ci-op-tp4l4p9q-2e847-g56d6-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-tp4l4p9q-2e847.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-tp4l4p9q-2e847-g56d6-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T16:35:32Z node/ci-op-tp4l4p9q-2e847-g56d6-worker-0-5r8qp - reason/FailedToUpdateLease https://api-int.ci-op-tp4l4p9q-2e847.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-tp4l4p9q-2e847-g56d6-worker-0-5r8qp?timeout=10s - read tcp 10.93.157.105:35138->10.93.157.4:6443: read: connection reset by peer
2026-04-19T16:35:33Z node/ci-op-tp4l4p9q-2e847-g56d6-worker-0-5r8qp - reason/FailedToUpdateLease https://api-int.ci-op-tp4l4p9q-2e847.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-tp4l4p9q-2e847-g56d6-worker-0-5r8qp?timeout=10s - read tcp 10.93.157.105:54686->10.93.157.4:6443: read: connection reset by peer
2026-04-19T16:35:41Z node/ci-op-tp4l4p9q-2e847-g56d6-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-tp4l4p9q-2e847.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-tp4l4p9q-2e847-g56d6-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T16:35:42Z node/ci-op-tp4l4p9q-2e847-g56d6-worker-0-c27pk - reason/FailedToUpdateLease https://api-int.ci-op-tp4l4p9q-2e847.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-tp4l4p9q-2e847-g56d6-worker-0-c27pk?timeout=10s - read tcp 10.93.157.104:38846->10.93.157.4:6443: read: connection reset by peer
2026-04-19T16:35:43Z node/ci-op-tp4l4p9q-2e847-g56d6-worker-0-c27pk - reason/FailedToUpdateLease https://api-int.ci-op-tp4l4p9q-2e847.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-tp4l4p9q-2e847-g56d6-worker-0-c27pk?timeout=10s - read tcp 10.93.157.104:39752->10.93.157.4:6443: read: connection reset by peer
2026-04-19T16:36:42Z node/ci-op-tp4l4p9q-2e847-g56d6-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-tp4l4p9q-2e847.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-tp4l4p9q-2e847-g56d6-master-1?timeout=10s - read tcp 10.93.157.101:48922->10.93.157.4:6443: read: connection reset by peer
2026-04-19T16:36:44Z node/ci-op-tp4l4p9q-2e847-g56d6-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-tp4l4p9q-2e847.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-tp4l4p9q-2e847-g56d6-master-1?timeout=10s - read tcp 10.93.157.101:45164->10.93.157.4:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-5.0-metal-ovn-network-flow-matrix-bm (all) - 2 runs, 0% failed, 50% of runs match
#2045850026432794624junit29 hours ago
2026-04-19T14:25:53Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:54944->192.168.111.5:6443: read: connection reset by peer
2026-04-19T14:25:59Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:45080->192.168.111.5:6443: read: connection reset by peer
2026-04-19T14:25:59Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:36812->192.168.111.5:6443: read: connection reset by peer
2026-04-19T14:26:11Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-upgrade-out-of-change (all) - 7 runs, 29% failed, 50% of failures match = 14% impact
#2045833461586989056junit30 hours ago
    StdOut>
    E0419 14:27:10.817375   83270 v2.go:167] "Unhandled Error" err="next reader: read tcp 172.24.130.88:53562->54.241.157.209:6443: read: connection reset by peer"
    E0419 14:27:10.817379   83270 v2.go:129] "Unhandled Error" err="next reader: read tcp 172.24.130.88:53562->54.241.157.209:6443: read: connection reset by peer"
    E0419 14:27:10.817415   83270 v2.go:150] "Unhandled Error" err="next reader: read tcp 172.24.130.88:53562->54.241.157.209:6443: read: connection reset by peer"
    error: error reading from error stream: next reader: read tcp 172.24.130.88:53562->54.241.157.209:6443: read: connection reset by peer
    StdErr>
    E0419 14:27:10.817375   83270 v2.go:167] "Unhandled Error" err="next reader: read tcp 172.24.130.88:53562->54.241.157.209:6443: read: connection reset by peer"
    E0419 14:27:10.817379   83270 v2.go:129] "Unhandled Error" err="next reader: read tcp 172.24.130.88:53562->54.241.157.209:6443: read: connection reset by peer"
    E0419 14:27:10.817415   83270 v2.go:150] "Unhandled Error" err="next reader: read tcp 172.24.130.88:53562->54.241.157.209:6443: read: connection reset by peer"
    error: error reading from error stream: next reader: read tcp 172.24.130.88:53562->54.241.157.209:6443: read: connection reset by peer
    exit status 1
periodic-ci-openshift-release-main-ci-4.12-e2e-aws-ovn-serial (all) - 2 runs, 0% failed, 50% of runs match
#2045855700554354688junit30 hours ago
Apr 19 14:36:14.789 E ns/e2e-volumelimits-5539-2891 pod/csi-hostpathplugin-0 node/ip-10-0-167-56.ec2.internal uid/f65308fd-39c6-4448-9152-21aecbe2d9f6 container/csi-resizer reason/ContainerExit code/2 cause/Error
Apr 19 14:56:34.658 E clusteroperator/network condition/Degraded status/True reason/ApplyOperatorConfig changed: Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s: Patch "https://api-int.ci-op-y0ygppqj-d7194.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-diagnostics/roles/prometheus-k8s?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.148.122:40580->10.0.141.146:6443: read: connection reset by peer
Apr 19 14:56:34.658 - 39s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s: Patch "https://api-int.ci-op-y0ygppqj-d7194.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-network-diagnostics/roles/prometheus-k8s?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.148.122:40580->10.0.141.146:6443: read: connection reset by peer
Apr 19 14:57:11.018 - 998ms E disruption/cache-kube-api connection/reused reason/DisruptionBegan disruption/cache-kube-api connection/reused stopped responding to GET requests over reused connections: Get "https://api.ci-op-y0ygppqj-d7194.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/default?resourceVersion=0": net/http: timeout awaiting response headers
Apr 19 15:01:14.617 E clusteroperator/network condition/Degraded status/True reason/ApplyOperatorConfig changed: Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-multus-networkpolicy: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-multus-networkpolicy: Patch "https://api-int.ci-op-y0ygppqj-d7194.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/openshift-multus-networkpolicy?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.148.122:37438->10.0.141.146:6443: read: connection reset by peer
Apr 19 15:01:14.617 - 39s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-multus-networkpolicy: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-multus-networkpolicy: Patch "https://api-int.ci-op-y0ygppqj-d7194.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/openshift-multus-networkpolicy?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.148.122:37438->10.0.141.146:6443: read: connection reset by peer
Apr 19 15:01:45.365 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator network is degraded
periodic-ci-shiftstack-ci-release-4.22-e2e-openstack-csi-manila (all) - 1 runs, 0% failed, 100% of runs match
#2045844128343789568junit30 hours ago
2026-04-19T14:44:04Z node/xk3w07wt-68b80-grx4r-master-2 - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T14:44:30Z node/xk3w07wt-68b80-grx4r-master-2 - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-master-2?timeout=10s - read tcp 10.0.2.21:37310->10.0.0.5:6443: read: connection reset by peer
2026-04-19T14:46:27Z node/xk3w07wt-68b80-grx4r-worker-0-4dvjl - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-worker-0-4dvjl?timeout=10s - http2: client connection force closed via ClientConn.Close
2026-04-19T14:46:27Z node/xk3w07wt-68b80-grx4r-worker-0-bzqrl - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-worker-0-bzqrl?timeout=10s - read tcp 10.0.3.44:56472->10.0.0.5:6443: read: connection reset by peer
2026-04-19T14:46:30Z node/xk3w07wt-68b80-grx4r-worker-0-bnvct - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-worker-0-bnvct?timeout=10s - read tcp 10.0.2.87:50202->10.0.0.5:6443: read: connection reset by peer
2026-04-19T14:46:34Z node/xk3w07wt-68b80-grx4r-worker-0-4dvjl - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-worker-0-4dvjl?timeout=10s - read tcp 10.0.0.240:55944->10.0.0.5:6443: read: connection reset by peer
2026-04-19T14:46:34Z node/xk3w07wt-68b80-grx4r-worker-0-bnvct - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-worker-0-bnvct?timeout=10s - read tcp 10.0.2.87:50528->10.0.0.5:6443: read: connection reset by peer
2026-04-19T14:46:34Z node/xk3w07wt-68b80-grx4r-master-2 - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-master-2?timeout=10s - read tcp 10.0.2.21:60018->10.0.0.5:6443: read: connection reset by peer
2026-04-19T14:46:36Z node/xk3w07wt-68b80-grx4r-worker-0-bzqrl - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-worker-0-bzqrl?timeout=10s - read tcp 10.0.3.44:58340->10.0.0.5:6443: read: connection reset by peer
2026-04-19T14:46:36Z node/xk3w07wt-68b80-grx4r-worker-0-bzqrl - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-worker-0-bzqrl?timeout=10s - read tcp 10.0.3.44:59330->10.0.0.5:6443: read: connection reset by peer
2026-04-19T14:46:37Z node/xk3w07wt-68b80-grx4r-worker-0-4dvjl - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-worker-0-4dvjl?timeout=10s - read tcp 10.0.0.240:57044->10.0.0.5:6443: read: connection reset by peer
2026-04-19T14:46:37Z node/xk3w07wt-68b80-grx4r-worker-0-bnvct - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-worker-0-bnvct?timeout=10s - read tcp 10.0.2.87:37416->10.0.0.5:6443: read: connection reset by peer
2026-04-19T14:46:44Z node/xk3w07wt-68b80-grx4r-master-2 - reason/FailedToUpdateLease https://api-int.xk3w07wt-68b80.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/xk3w07wt-68b80-grx4r-master-2?timeout=10s - context deadline exceeded (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-rt (all) - 7 runs, 14% failed, 100% of failures match = 14% impact
#2045833447330549760junit30 hours ago
    <*url.Error | 0xc009d30b40>:
    Post "https://api.ci-op-lc7ptpil-437c9.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/authorization.openshift.io/v1/namespaces/e2e-test-project-api-2wll7/rolebindings": read tcp 10.128.176.70:41096->34.36.149.23:6443: read: connection reset by peer
    {
periodic-ci-openshift-release-main-nightly-4.20-e2e-agent-ha-dualstack-conformance (all) - 4 runs, 50% failed, 50% of failures match = 25% impact
#2045834984626851840junit30 hours ago
2026-04-19T14:17:50Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.82:52324->192.168.111.5:6443: read: connection reset by peer
2026-04-19T14:18:11Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ovn-two-node-fencing-ipv6-recovery (all) - 4 runs, 50% failed, 50% of failures match = 25% impact
#2045809418276179968junit31 hours ago
2026-04-19T12:26:59Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
2026-04-19T12:46:02Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::15]:39864->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T13:15:55Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.20-upgrade-from-stable-4.19-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 6 runs, 0% failed, 33% of runs match
#2045811544331128832junit31 hours ago
2026-04-19T13:13:42Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on [fd2e:6f44:5dd8:c956::1]:53: no such host
2026-04-19T13:13:55Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::14]:41728->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T13:13:55Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::17]:41440->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T13:19:52Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
#2045600180476055552junit44 hours ago
2026-04-18T23:20:31Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on [fd2e:6f44:5dd8:c956::1]:53: no such host
2026-04-18T23:20:43Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::17]:38232->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-18T23:20:46Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::18]:54148->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-18T23:20:46Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::14]:50240->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
#2045600180476055552junit44 hours ago
2026-04-18T23:20:31Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on [fd2e:6f44:5dd8:c956::1]:53: no such host
2026-04-18T23:20:43Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::17]:38232->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-18T23:20:46Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::18]:54148->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-18T23:20:46Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::14]:50240->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
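Nearly every `FailedToUpdateLease` event in these runs is the kubelet failing a lease-renewal request against `api-int`, and the trailing error string identifies the failure mode (connection reset, DNS lookup failure, client timeout, and so on). A quick triage sketch — a hypothetical helper, not part of any tooling shown here — that buckets such lines by that suffix:

```python
import re
from collections import Counter

# Hypothetical triage helper for FailedToUpdateLease events: bucket each
# event by the transport error that ended the renewal request. The needles
# below are the suffixes seen in this excerpt, not an exhaustive list.
EVENT_RE = re.compile(
    r"^(?P<ts>\S+) node/(?P<node>\S+) - reason/FailedToUpdateLease "
    r"(?P<url>\S+) - (?P<err>.+)$"
)

BUCKETS = [
    ("connection reset by peer", "conn-reset"),
    ("connection refused", "conn-refused"),
    ("Client.Timeout exceeded", "client-timeout"),
    ("context deadline exceeded", "deadline"),
    ("no such host", "dns"),
    ("http2: client connection lost", "http2-lost"),
]

def classify(line):
    """Return (node, bucket) for one event line, or None if it doesn't match."""
    m = EVENT_RE.match(line.strip())
    if not m:
        return None
    for needle, bucket in BUCKETS:
        if needle in m.group("err"):
            return m.group("node"), bucket
    return m.group("node"), "other"

def tally(lines):
    """Count matching events per bucket across a log excerpt."""
    counts = Counter()
    for line in lines:
        hit = classify(line)
        if hit:
            counts[hit[1]] += 1
    return counts
```

As a starting hypothesis only: `conn-reset` clusters during an upgrade or migration window typically line up with apiserver restarts behind the load balancer, while persistent `dns` or `conn-refused` buckets point more toward the `api-int` endpoint or resolver itself.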
periodic-ci-openshift-release-main-nightly-4.19-e2e-aws-serial-runc (all) - 1 runs, 0% failed, 100% of runs match
#2045821726750674944junit31 hours ago
Apr 19 14:09:54.300 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role: Patch "https://api-int.ci-op-4c0cvfyf-484c9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/openshift-network-public-role?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.86.155:47404->10.0.71.137:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 14:09:54.300 - 27s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role: Patch "https://api-int.ci-op-4c0cvfyf-484c9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/openshift-network-public-role?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.86.155:47404->10.0.71.137:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 14:10:21.504 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
Apr 19 14:13:48.002 E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited: Patch "https://api-int.ci-op-4c0cvfyf-484c9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-ovn-kubernetes/roles/openshift-ovn-kubernetes-control-plane-limited?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.86.155:53068->10.0.71.137:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 14:13:48.002 - 28s   E clusteroperator/network condition/Degraded reason/ApplyOperatorConfig status/True Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited: Patch "https://api-int.ci-op-4c0cvfyf-484c9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-ovn-kubernetes/roles/openshift-ovn-kubernetes-control-plane-limited?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.86.155:53068->10.0.71.137:6443: read: connection reset by peer (exception: https://issues.redhat.com/browse/OCPBUGS-38684)
Apr 19 14:14:16.422 W clusteroperator/network condition/Degraded status/False (exception: Degraded=False is the happy case)
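The two-line pattern above — a point event, then the same condition repeated with a duration like `- 27s` — is the interval form of the monitor output. A small sketch (hypothetical parser, matching only the layout seen in these excerpts) that pulls out the Degraded windows so they can be summed when judging how long the operator stayed unhappy:

```python
import re

# Hypothetical parser for interval lines of the form
#   "Apr 19 14:09:54.300 - 27s   E clusteroperator/network condition/Degraded ..."
# It extracts the duration token; point events and Degraded=False lines
# (which carry no "- Ns" duration) are ignored.
INTERVAL_RE = re.compile(
    r"^\w{3} \d{1,2} [\d:.]+ - (?P<dur>\d+)s\s+E clusteroperator/(?P<op>\S+) "
    r"condition/Degraded"
)

def degraded_seconds(lines):
    """Sum the seconds spent Degraded across all interval lines."""
    total = 0
    for line in lines:
        m = INTERVAL_RE.match(line.strip())
        if m:
            total += int(m.group("dur"))
    return total
```

For the run above that gives 27s + 28s of Degraded time, each window closing once the `Degraded status/False` happy-case line appears.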
#2045821726750674944junit31 hours ago
2026-04-19T11:58:20Z node/ip-10-0-4-245.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4c0cvfyf-484c9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-4-245.ec2.internal?timeout=10s - read tcp 10.0.4.245:57492->10.0.71.137:6443: read: connection reset by peer
2026-04-19T11:58:34Z node/ip-10-0-89-164.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4c0cvfyf-484c9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-164.ec2.internal?timeout=10s - read tcp 10.0.89.164:51590->10.0.35.169:6443: read: connection reset by peer
2026-04-19T11:58:48Z node/ip-10-0-86-155.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4c0cvfyf-484c9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-86-155.ec2.internal?timeout=10s - read tcp 10.0.86.155:51754->10.0.35.169:6443: read: connection reset by peer
2026-04-19T13:39:14Z node/ip-10-0-89-164.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-4c0cvfyf-484c9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-164.ec2.internal?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-ci-4.13-e2e-aws-sdn-serial (all) - 3 runs, 0% failed, 33% of runs match
#2045825095384961024junit32 hours ago
Apr 19 13:22:13.636 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-115.ec2.internal node/ip-10-0-169-115.ec2.internal uid/7643e487-f7da-44de-84a0-b3d54e1d3cf2 invariant violation: init container statuses were removed
Apr 19 13:24:49.748 E clusteroperator/network condition/Degraded status/True reason/ApplyOperatorConfig changed: Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: Patch "https://api-int.ci-op-5m3x5k1h-c020c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/roles/whereabouts-cni?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.169.115:50012->10.0.166.26:6443: read: connection reset by peer
Apr 19 13:24:49.748 - 35s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: Patch "https://api-int.ci-op-5m3x5k1h-c020c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/roles/whereabouts-cni?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.169.115:50012->10.0.166.26:6443: read: connection reset by peer
Apr 19 13:25:59.709 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-191-46.ec2.internal node/ip-10-0-191-46.ec2.internal uid/2ca9855d-c12b-44f0-bc28-f37e575e7d07 container/kube-apiserver-cert-syncer reason/ContainerExit code/2 cause/Error rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0419 13:15:01.816320       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0419 13:15:01.816594       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0419 13:15:02.614802       1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nI0419 13:15:02.615132       1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} 
{user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
#2045825095384961024junit32 hours ago
Apr 19 13:24:49.748 - 35s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni: Patch "https://api-int.ci-op-5m3x5k1h-c020c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-multus/roles/whereabouts-cni?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.169.115:50012->10.0.166.26:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.19-e2e-vsphere-ovn-etcd-scaling (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045829780833570816junit32 hours ago
2026-04-19T12:39:38Z node/ci-op-trzvlfrg-05225-hxtsr-worker-0-dcxkv - reason/FailedToUpdateLease https://api-int.ci-op-trzvlfrg-05225.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-trzvlfrg-05225-hxtsr-worker-0-dcxkv?timeout=10s - read tcp 10.93.251.69:32984->10.93.251.16:6443: read: connection reset by peer
2026-04-19T12:39:39Z node/ci-op-trzvlfrg-05225-hxtsr-worker-0-pch47 - reason/FailedToUpdateLease https://api-int.ci-op-trzvlfrg-05225.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-trzvlfrg-05225-hxtsr-worker-0-pch47?timeout=10s - read tcp 10.93.251.68:41728->10.93.251.16:6443: read: connection reset by peer
2026-04-19T12:39:42Z node/ci-op-trzvlfrg-05225-hxtsr-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-trzvlfrg-05225.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-trzvlfrg-05225-hxtsr-master-2?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045829780833570816junit32 hours ago
2026-04-19T12:40:15Z node/ci-op-trzvlfrg-05225-hxtsr-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-trzvlfrg-05225.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-trzvlfrg-05225-hxtsr-master-2?timeout=10s - http2: client connection lost
2026-04-19T12:56:12Z node/ci-op-trzvlfrg-05225-hxtsr-worker-0-pch47 - reason/FailedToUpdateLease https://api-int.ci-op-trzvlfrg-05225.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-trzvlfrg-05225-hxtsr-worker-0-pch47?timeout=10s - read tcp 10.93.251.68:55920->10.93.251.16:6443: read: connection reset by peer
2026-04-19T12:56:14Z node/ci-op-trzvlfrg-05225-hxtsr-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-trzvlfrg-05225.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-trzvlfrg-05225-hxtsr-master-2?timeout=10s - read tcp 10.93.251.66:60586->10.93.251.16:6443: read: connection reset by peer
2026-04-19T12:58:58Z node/ci-op-trzvlfrg-05225-hxtsr-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-trzvlfrg-05225.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-trzvlfrg-05225-hxtsr-master-2?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-ci-5.0-upgrade-from-stable-4.22-e2e-azure-ovn-upgrade (all) - 80 runs, 38% failed, 3% of failures match = 1% impact
#2045790167507144704junit32 hours ago
    container_use_devices --> off
    E0419 12:06:18.581247   92624 v2.go:167] "Unhandled Error" err="next reader: read tcp 172.24.37.173:34488->172.184.149.136:6443: read: connection reset by peer"
    E0419 12:06:18.581266   92624 v2.go:129] "Unhandled Error" err="next reader: read tcp 172.24.37.173:34488->172.184.149.136:6443: read: connection reset by peer"
    E0419 12:06:18.581301   92624 v2.go:150] "Unhandled Error" err="next reader: read tcp 172.24.37.173:34488->172.184.149.136:6443: read: connection reset by peer"
    error: error reading from error stream: next reader: read tcp 172.24.37.173:34488->172.184.149.136:6443: read: connection reset by peer
    StdErr>
    container_use_devices --> off
    E0419 12:06:18.581247   92624 v2.go:167] "Unhandled Error" err="next reader: read tcp 172.24.37.173:34488->172.184.149.136:6443: read: connection reset by peer"
    E0419 12:06:18.581266   92624 v2.go:129] "Unhandled Error" err="next reader: read tcp 172.24.37.173:34488->172.184.149.136:6443: read: connection reset by peer"
    E0419 12:06:18.581301   92624 v2.go:150] "Unhandled Error" err="next reader: read tcp 172.24.37.173:34488->172.184.149.136:6443: read: connection reset by peer"
    error: error reading from error stream: next reader: read tcp 172.24.37.173:34488->172.184.149.136:6443: read: connection reset by peer
    exit status 1
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-ovn-cgroupsv2 (all) - 6 runs, 0% failed, 33% of runs match
#2045811486428762112junit32 hours ago
2026-04-19T11:15:28Z node/ip-10-0-84-36.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-1yk1dkfz-c6cae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-84-36.us-west-1.compute.internal?timeout=10s - read tcp 10.0.84.36:57156->10.0.42.140:6443: read: connection reset by peer
#2045600194640220160junit46 hours ago
2026-04-18T21:14:18Z node/ip-10-0-37-59.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-wkwt0z3b-c6cae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-37-59.us-west-1.compute.internal?timeout=10s - read tcp 10.0.37.59:58756->10.0.16.90:6443: read: connection reset by peer
2026-04-18T21:14:22Z node/ip-10-0-119-179.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-wkwt0z3b-c6cae.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-179.us-west-1.compute.internal?timeout=10s - read tcp 10.0.119.179:53252->10.0.125.215:6443: read: connection reset by peer
periodic-ci-shiftstack-ci-release-4.19-upgrade-from-stable-4.18-e2e-openstack-ovn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045779196805910528junit32 hours ago
2026-04-19T09:50:05Z node/5c3l491v-bd723-pbgwp-worker-0-jh6jt - reason/FailedToUpdateLease https://api-int.5c3l491v-bd723.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/5c3l491v-bd723-pbgwp-worker-0-jh6jt?timeout=10s - read tcp 10.0.1.9:53264->10.0.0.5:6443: read: connection reset by peer
2026-04-19T09:50:05Z node/5c3l491v-bd723-pbgwp-worker-0-frjml - reason/FailedToUpdateLease https://api-int.5c3l491v-bd723.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/5c3l491v-bd723-pbgwp-worker-0-frjml?timeout=10s - read tcp 10.0.1.193:47194->10.0.0.5:6443: read: connection reset by peer
2026-04-19T09:50:08Z node/5c3l491v-bd723-pbgwp-worker-0-mf855 - reason/FailedToUpdateLease https://api-int.5c3l491v-bd723.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/5c3l491v-bd723-pbgwp-worker-0-mf855?timeout=10s - read tcp 10.0.1.205:44362->10.0.0.5:6443: read: connection reset by peer
2026-04-19T09:50:10Z node/5c3l491v-bd723-pbgwp-master-2 - reason/FailedToUpdateLease https://api-int.5c3l491v-bd723.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/5c3l491v-bd723-pbgwp-master-2?timeout=10s - read tcp 10.0.2.12:48624->10.0.0.5:6443: read: connection reset by peer
2026-04-19T10:10:17Z node/5c3l491v-bd723-pbgwp-master-1 - reason/FailedToUpdateLease https://api-int.5c3l491v-bd723.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/5c3l491v-bd723-pbgwp-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045779196805910528junit32 hours ago
2026-04-19T10:58:25Z node/5c3l491v-bd723-pbgwp-master-1 - reason/FailedToUpdateLease https://api-int.5c3l491v-bd723.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/5c3l491v-bd723-pbgwp-master-1?timeout=10s - dial tcp: lookup api-int.5c3l491v-bd723.shiftstack.devcluster.openshift.com on 1.1.1.1:53: no such host
2026-04-19T10:58:26Z node/5c3l491v-bd723-pbgwp-master-2 - reason/FailedToUpdateLease https://api-int.5c3l491v-bd723.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/5c3l491v-bd723-pbgwp-master-2?timeout=10s - read tcp 10.0.2.12:38392->10.0.0.5:6443: read: connection reset by peer
2026-04-19T10:58:27Z node/5c3l491v-bd723-pbgwp-worker-0-frjml - reason/FailedToUpdateLease https://api-int.5c3l491v-bd723.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/5c3l491v-bd723-pbgwp-worker-0-frjml?timeout=10s - read tcp 10.0.1.193:60362->10.0.0.5:6443: read: connection reset by peer
2026-04-19T11:06:06Z node/5c3l491v-bd723-pbgwp-master-0 - reason/FailedToUpdateLease https://api-int.5c3l491v-bd723.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/5c3l491v-bd723-pbgwp-master-0?timeout=10s - dial tcp: lookup api-int.5c3l491v-bd723.shiftstack.devcluster.openshift.com on 1.1.1.1:53: no such host
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-recovery-1of3 (all) - 4 runs, 100% failed, 50% of failures match = 50% impact
#2045809401754816512junit33 hours ago
2026-04-19T11:59:12Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:56120->192.168.111.5:6443: read: connection reset by peer
2026-04-19T12:05:15Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045598000436219904junit47 hours ago
2026-04-18T21:47:15Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:39506->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ovn-two-node-fencing-dualstack-recovery (all) - 4 runs, 0% failed, 50% of runs match
#2045764096837554176junit34 hours ago
2026-04-19T09:01:08Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
2026-04-19T09:21:09Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:51174->192.168.111.5:6443: read: connection reset by peer
2026-04-19T09:56:21Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
#2045552701718138880junit2 days ago
2026-04-18T20:15:16Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-18T20:44:47Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp 192.168.111.21:55922->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.17-upgrade-from-stable-4.16-e2e-vsphere-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2045795814311202816junit34 hours ago
2026-04-19T10:58:45Z node/ci-op-7szl5mc0-af028-n22h4-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-7szl5mc0-af028.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-7szl5mc0-af028-n22h4-master-2?timeout=10s - dial tcp 10.93.251.4:6443: connect: connection refused
2026-04-19T10:58:53Z node/ci-op-7szl5mc0-af028-n22h4-worker-0-tw422 - reason/FailedToUpdateLease https://api-int.ci-op-7szl5mc0-af028.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-7szl5mc0-af028-n22h4-worker-0-tw422?timeout=10s - read tcp 10.93.251.29:38136->10.93.251.4:6443: read: connection reset by peer
2026-04-19T10:58:54Z node/ci-op-7szl5mc0-af028-n22h4-worker-0-2mxxv - reason/FailedToUpdateLease https://api-int.ci-op-7szl5mc0-af028.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-7szl5mc0-af028-n22h4-worker-0-2mxxv?timeout=10s - read tcp 10.93.251.28:55190->10.93.251.4:6443: read: connection reset by peer
2026-04-19T10:58:56Z node/ci-op-7szl5mc0-af028-n22h4-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-7szl5mc0-af028.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-7szl5mc0-af028-n22h4-master-2?timeout=10s - write tcp 10.93.251.24:48378->10.93.251.4:6443: write: connection reset by peer
2026-04-19T10:58:57Z node/ci-op-7szl5mc0-af028-n22h4-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-7szl5mc0-af028.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-7szl5mc0-af028-n22h4-master-0?timeout=10s - read tcp 10.93.251.23:49272->10.93.251.4:6443: read: connection reset by peer
2026-04-19T10:59:04Z node/ci-op-7szl5mc0-af028-n22h4-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-7szl5mc0-af028.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-7szl5mc0-af028-n22h4-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-two-node-fencing-ipv6-certrotation (all) - 3 runs, 67% failed, 50% of failures match = 33% impact
#2045767117894062080junit35 hours ago
2026-04-19T08:30:28Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
2026-04-19T08:30:31Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::15]:49580->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T08:30:31Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
#2045767117894062080junit35 hours ago
2026-04-19T08:30:31Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
2026-04-19T08:30:31Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::14]:36084->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T08:30:31Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv4-rhcos10-techpreview (all) - 4 runs, 25% failed, 100% of failures match = 25% impact
#2045750506688614400junit35 hours ago
2026-04-19T09:36:52Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:52506->192.168.111.5:6443: read: connection reset by peer
2026-04-19T09:37:09Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-nightly-4.19-e2e-aws-ovn-proxy (all) - 1 runs, 0% failed, 100% of runs match
#2045765103017529344junit36 hours ago
2026-04-19T08:15:29Z node/ip-10-0-74-8.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i2f4v8z9-02015.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-74-8.us-west-2.compute.internal?timeout=10s - read tcp 10.0.74.8:43120->10.0.76.206:6443: read: connection reset by peer
2026-04-19T08:15:32Z node/ip-10-0-60-221.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-i2f4v8z9-02015.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-60-221.us-west-2.compute.internal?timeout=10s - read tcp 10.0.60.221:44868->10.0.63.56:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.19-e2e-rosa-sts-ovn (all) - 17 runs, 94% failed, 6% of failures match = 6% impact
#2045741314376470528junit36 hours ago
2026-04-19T06:53:57Z node/ip-10-0-59-120.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-rosa-s-oh3n.0m9h.s1.devshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-120.us-west-2.compute.internal?timeout=10s - read tcp 10.0.59.120:35132->10.0.56.33:6443: read: connection reset by peer
2026-04-19T06:57:51Z node/ip-10-0-59-120.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-rosa-s-oh3n.0m9h.s1.devshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-59-120.us-west-2.compute.internal?timeout=10s - read tcp 10.0.59.120:35160->10.0.56.33:6443: read: connection reset by peer
2026-04-19T06:57:51Z node/ip-10-0-53-36.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-rosa-s-oh3n.0m9h.s1.devshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-36.us-west-2.compute.internal?timeout=10s - read tcp 10.0.53.36:56970->10.0.56.33:6443: read: connection reset by peer
2026-04-19T06:57:55Z node/ip-10-0-43-137.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-rosa-s-oh3n.0m9h.s1.devshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-137.us-west-2.compute.internal?timeout=10s - read tcp 10.0.43.137:60488->10.0.56.33:6443: read: connection reset by peer
2026-04-19T07:17:11Z node/ip-10-0-53-36.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-rosa-s-oh3n.0m9h.s1.devshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-36.us-west-2.compute.internal?timeout=10s - read tcp 10.0.53.36:44506->10.0.56.33:6443: read: connection reset by peer
2026-04-19T07:17:16Z node/ip-10-0-41-89.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-rosa-s-oh3n.0m9h.s1.devshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-41-89.us-west-2.compute.internal?timeout=10s - read tcp 10.0.41.89:56404->10.0.56.33:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.22-ocp-e2e-aws-ovn-sno-multi-a-a (all) - 7 runs, 43% failed, 33% of failures match = 14% impact
#2045738286026067968junit37 hours ago
    <*fmt.wrapError | 0xc009bc02a0>:
    error reading from error stream: read message: read tcp 172.24.60.12:43604->54.67.17.118:6443: read: connection reset by peer
    {
        msg: "error reading from error stream: read message: read tcp 172.24.60.12:43604->54.67.17.118:6443: read: connection reset by peer",
        err: <*fmt.wrapError | 0xc009bc0280>{
            msg: "read message: read tcp 172.24.60.12:43604->54.67.17.118:6443: read: connection reset by peer",
            err: <*net.OpError | 0xc001797b80>{
periodic-ci-openshift-cluster-kube-apiserver-operator-release-4.23-periodics-e2e-metal-event-ttl (all) - 2 runs, 0% failed, 50% of runs match
#2045717034741796864junit38 hours ago
2026-04-19T05:18:00Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::14]:49000->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T05:18:14Z node/master-2.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
periodic-ci-openshift-cluster-network-operator-release-4.20-e2e-aws-ovn-clusternetwork-cidr-expansion (all) - 1 runs, 0% failed, 100% of runs match
#2045698982277025792junit39 hours ago
2026-04-19T04:03:46Z node/ip-10-0-5-197.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsjfjyd-8ee17.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-197.us-west-2.compute.internal?timeout=10s - read tcp 10.0.5.197:36882->10.0.47.215:6443: read: connection reset by peer
2026-04-19T04:03:51Z node/ip-10-0-35-59.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsjfjyd-8ee17.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-59.us-west-2.compute.internal?timeout=10s - read tcp 10.0.35.59:48636->10.0.91.236:6443: read: connection reset by peer
2026-04-19T04:07:33Z node/ip-10-0-46-253.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsjfjyd-8ee17.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-253.us-west-2.compute.internal?timeout=10s - read tcp 10.0.46.253:48900->10.0.91.236:6443: read: connection reset by peer
2026-04-19T04:07:41Z node/ip-10-0-5-197.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsjfjyd-8ee17.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-5-197.us-west-2.compute.internal?timeout=10s - read tcp 10.0.5.197:47820->10.0.47.215:6443: read: connection reset by peer
2026-04-19T04:08:04Z node/ip-10-0-10-145.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsjfjyd-8ee17.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-10-145.us-west-2.compute.internal?timeout=10s - read tcp 10.0.10.145:44370->10.0.47.215:6443: read: connection reset by peer
2026-04-19T04:20:22Z node/ip-10-0-35-59.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsjfjyd-8ee17.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-59.us-west-2.compute.internal?timeout=10s - read tcp 10.0.35.59:55604->10.0.47.215:6443: read: connection reset by peer
2026-04-19T04:24:18Z node/ip-10-0-110-184.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsjfjyd-8ee17.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-110-184.us-west-2.compute.internal?timeout=10s - read tcp 10.0.110.184:53196->10.0.91.236:6443: read: connection reset by peer
2026-04-19T04:24:19Z node/ip-10-0-7-216.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsjfjyd-8ee17.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-7-216.us-west-2.compute.internal?timeout=10s - read tcp 10.0.7.216:60022->10.0.91.236:6443: read: connection reset by peer
2026-04-19T04:24:27Z node/ip-10-0-35-59.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsjfjyd-8ee17.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-59.us-west-2.compute.internal?timeout=10s - read tcp 10.0.35.59:34844->10.0.47.215:6443: read: connection reset by peer
2026-04-19T04:28:10Z node/ip-10-0-81-29.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsjfjyd-8ee17.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-81-29.us-west-2.compute.internal?timeout=10s - read tcp 10.0.81.29:50194->10.0.91.236:6443: read: connection reset by peer
2026-04-19T04:28:19Z node/ip-10-0-10-145.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsjfjyd-8ee17.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-10-145.us-west-2.compute.internal?timeout=10s - read tcp 10.0.10.145:40096->10.0.91.236:6443: read: connection reset by peer
2026-04-19T04:28:39Z node/ip-10-0-46-253.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ivsjfjyd-8ee17.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-46-253.us-west-2.compute.internal?timeout=10s - read tcp 10.0.46.253:58850->10.0.47.215:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-arbiter-upgrade-day-2-workers (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2045706214439915520junit39 hours ago
2026-04-19T05:59:43Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:54382->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:59:44Z node/extraworker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-0?timeout=10s - read tcp 192.168.111.23:59454->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:59:50Z node/extraworker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-1?timeout=10s - read tcp 192.168.111.24:46338->192.168.111.5:6443: read: connection reset by peer
2026-04-19T05:59:51Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - read tcp 192.168.111.21:37948->192.168.111.5:6443: read: connection reset by peer
periodic-ci-shiftstack-ci-release-4.16-e2e-openstack-dualstack (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045705208348020736junit39 hours ago
2026-04-19T04:52:14Z node/4r6v2k68-e809e-g88j5-worker-0-cf4x4 - reason/FailedToUpdateLease https://api-int.4r6v2k68-e809e.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/4r6v2k68-e809e-g88j5-worker-0-cf4x4?timeout=10s - http2: client connection force closed via ClientConn.Close
2026-04-19T04:52:20Z node/4r6v2k68-e809e-g88j5-master-1 - reason/FailedToUpdateLease https://api-int.4r6v2k68-e809e.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/4r6v2k68-e809e-g88j5-master-1?timeout=10s - read tcp 192.168.25.185:60634->192.168.25.43:6443: read: connection reset by peer
2026-04-19T04:52:28Z node/4r6v2k68-e809e-g88j5-master-2 - reason/FailedToUpdateLease https://api-int.4r6v2k68-e809e.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/4r6v2k68-e809e-g88j5-master-2?timeout=10s - write tcp 192.168.25.65:45714->192.168.25.43:6443: write: connection reset by peer
#2045705208348020736junit39 hours ago
2026-04-19T04:54:00Z node/4r6v2k68-e809e-g88j5-master-0 - reason/FailedToUpdateLease https://api-int.4r6v2k68-e809e.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/4r6v2k68-e809e-g88j5-master-0?timeout=10s - http2: client connection lost
2026-04-19T05:03:05Z node/4r6v2k68-e809e-g88j5-master-0 - reason/FailedToUpdateLease https://api-int.4r6v2k68-e809e.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/4r6v2k68-e809e-g88j5-master-0?timeout=10s - read tcp 192.168.25.145:38050->192.168.25.43:6443: read: connection reset by peer
2026-04-19T05:03:05Z node/4r6v2k68-e809e-g88j5-master-1 - reason/FailedToUpdateLease https://api-int.4r6v2k68-e809e.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/4r6v2k68-e809e-g88j5-master-1?timeout=10s - read tcp 192.168.25.185:50154->192.168.25.43:6443: read: connection reset by peer
2026-04-19T05:03:15Z node/4r6v2k68-e809e-g88j5-master-2 - reason/FailedToUpdateLease https://api-int.4r6v2k68-e809e.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/4r6v2k68-e809e-g88j5-master-2?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ipi-ovn-ipv6-techpreview (all) - 8 runs, 13% failed, 100% of failures match = 13% impact
#2045668270953992192junit40 hours ago
2026-04-19T03:27:14Z node/master-2.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2.ostest.test.metalkube.org?timeout=10s - http2: client connection lost
2026-04-19T04:35:14Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::14]:52282->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T04:35:26Z node/worker-2.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2.ostest.test.metalkube.org?timeout=10s - context deadline exceeded
#2045668270953992192junit40 hours ago
2026-04-19T04:48:06Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
2026-04-19T04:48:07Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::17]:35100->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer (Client.Timeout exceeded while awaiting headers)
2026-04-19T04:48:08Z node/master-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
#2045668270953992192junit40 hours ago
2026-04-19T05:09:34Z node/worker-2.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2.ostest.test.metalkube.org?timeout=10s - http2: client connection force closed via ClientConn.Close
2026-04-19T05:09:34Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::17]:37950->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T05:09:45Z node/master-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0.ostest.test.metalkube.org?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.20-e2e-rosa-sts-ovn (all) - 11 runs, 100% failed, 9% of failures match = 9% impact
#2045679492118089728junit40 hours ago
2026-04-19T02:55:21Z node/ip-10-0-6-14.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-rosa-s-ufyy.ds3y.s1.devshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-14.us-west-2.compute.internal?timeout=10s - read tcp 10.0.6.14:40952->10.0.49.185:6443: read: connection reset by peer
2026-04-19T03:02:32Z node/ip-10-0-18-30.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-rosa-s-ufyy.ds3y.s1.devshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-18-30.us-west-2.compute.internal?timeout=10s - read tcp 10.0.18.30:41670->10.0.49.185:6443: read: connection reset by peer
2026-04-19T03:07:25Z node/ip-10-0-1-181.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-rosa-s-ufyy.ds3y.s1.devshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-1-181.us-west-2.compute.internal?timeout=10s - read tcp 10.0.1.181:36726->10.0.49.185:6443: read: connection reset by peer
2026-04-19T03:07:28Z node/ip-10-0-18-30.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-rosa-s-ufyy.ds3y.s1.devshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-18-30.us-west-2.compute.internal?timeout=10s - read tcp 10.0.18.30:41512->10.0.49.185:6443: read: connection reset by peer
2026-04-19T03:08:31Z node/ip-10-0-12-58.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-rosa-s-ufyy.ds3y.s1.devshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-12-58.us-west-2.compute.internal?timeout=10s - read tcp 10.0.12.58:47040->10.0.49.185:6443: read: connection reset by peer
2026-04-19T03:08:34Z node/ip-10-0-20-196.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-rosa-s-ufyy.ds3y.s1.devshift.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-20-196.us-west-2.compute.internal?timeout=10s - read tcp 10.0.20.196:43420->10.0.49.185:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-5.0-e2e-metal-ipi-ovn-techpreview (all) - 8 runs, 25% failed, 50% of failures match = 13% impact
#2045668149310787584junit40 hours ago
2026-04-19T03:31:31Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - context deadline exceeded
2026-04-19T03:31:40Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:60180->192.168.111.5:6443: read: connection reset by peer
2026-04-19T03:31:43Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:44110->192.168.111.5:6443: read: connection reset by peer
2026-04-19T03:31:43Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:56260->192.168.111.5:6443: read: connection reset by peer
2026-04-19T03:31:44Z node/worker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0?timeout=10s - read tcp 192.168.111.23:52860->192.168.111.5:6443: read: connection reset by peer
2026-04-19T03:31:52Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-nightly-4.23-upgrade-from-stable-4.22-e2e-metal-ovn-two-node-arbiter-upgrade-day-2-workers (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045654224515108864junit40 hours ago
2026-04-19T03:05:49Z node/extraworker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T03:05:54Z node/extraworker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-1?timeout=10s - read tcp 192.168.111.24:57180->192.168.111.5:6443: read: connection reset by peer
2026-04-19T03:05:54Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:60380->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-hypershift-release-4.19-periodics-e2e-kubevirt-metal-ovn (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045661418165899264junit41 hours ago
2026-04-19T03:05:10Z node/4519e71bc5cdbefe87b7-kqmcr-djhdw - reason/FailedToUpdateLease https://192.168.111.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/4519e71bc5cdbefe87b7-kqmcr-djhdw?timeout=10s - read tcp 10.128.0.127:56002->192.168.111.32:6443: read: connection reset by peer
2026-04-19T03:05:20Z node/4519e71bc5cdbefe87b7-kqmcr-hfr7r - reason/FailedToUpdateLease https://192.168.111.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/4519e71bc5cdbefe87b7-kqmcr-hfr7r?timeout=10s - read tcp 10.130.0.127:55580->192.168.111.32:6443: read: connection reset by peer
2026-04-19T03:05:31Z node/4519e71bc5cdbefe87b7-kqmcr-zvmhv - reason/FailedToUpdateLease https://192.168.111.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/4519e71bc5cdbefe87b7-kqmcr-zvmhv?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-nightly-4.20-upgrade-from-stable-4.19-e2e-metal-ipi-ovn-upgrade-runc (all) - 1 runs, 0% failed, 100% of runs match
#2045654093640241152junit41 hours ago
2026-04-19T03:07:50Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
2026-04-19T03:08:00Z node/worker-0.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-0.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::17]:42756->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T03:08:00Z node/worker-1.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::18]:50610->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
2026-04-19T03:08:04Z node/master-2.ostest.test.metalkube.org - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2.ostest.test.metalkube.org?timeout=10s - read tcp [fd2e:6f44:5dd8:c956::16]:35592->[fd2e:6f44:5dd8:c956::5]:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.19-e2e-vsphere-runc-upgrade (all) - 2 runs, 0% failed, 50% of runs match
#2045690360365060096junit41 hours ago
2026-04-19T04:24:00Z node/ci-op-fd1mzqqv-69284-257l2-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-fd1mzqqv-69284.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fd1mzqqv-69284-257l2-master-0?timeout=10s - read tcp 10.93.251.114:55630->10.93.251.16:6443: read: connection reset by peer
2026-04-19T04:24:01Z node/ci-op-fd1mzqqv-69284-257l2-worker-0-9lww6 - reason/FailedToUpdateLease https://api-int.ci-op-fd1mzqqv-69284.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fd1mzqqv-69284-257l2-worker-0-9lww6?timeout=10s - read tcp 10.93.251.121:60878->10.93.251.16:6443: read: connection reset by peer
2026-04-19T04:24:02Z node/ci-op-fd1mzqqv-69284-257l2-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-fd1mzqqv-69284.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fd1mzqqv-69284-257l2-master-1?timeout=10s - write tcp 10.93.251.115:40504->10.93.251.16:6443: write: connection reset by peer
2026-04-19T04:24:06Z node/ci-op-fd1mzqqv-69284-257l2-worker-0-62vnd - reason/FailedToUpdateLease https://api-int.ci-op-fd1mzqqv-69284.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fd1mzqqv-69284-257l2-worker-0-62vnd?timeout=10s - read tcp 10.93.251.120:46168->10.93.251.16:6443: read: connection reset by peer
2026-04-19T04:24:15Z node/ci-op-fd1mzqqv-69284-257l2-master-2 - reason/FailedToUpdateLease https://api-int.ci-op-fd1mzqqv-69284.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-fd1mzqqv-69284-257l2-master-2?timeout=10s - context deadline exceeded
periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ovn-two-node-arbiter-upgrade-day-2-workers (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045654164104548352junit41 hours ago
2026-04-19T02:18:37Z node/extraworker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-1?timeout=10s - read tcp 192.168.111.24:54556->192.168.111.5:6443: read: connection reset by peer
2026-04-19T02:19:08Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - http2: client connection force closed via ClientConn.Close (Client.Timeout exceeded while awaiting headers)
#2045654164104548352junit41 hours ago
2026-04-19T02:37:50Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - EOF
2026-04-19T02:37:50Z node/extraworker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-0?timeout=10s - read tcp 192.168.111.23:52796->192.168.111.5:6443: read: connection reset by peer
2026-04-19T02:37:50Z node/extraworker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-0?timeout=10s - context deadline exceeded (Client.Timeout exceeded while awaiting headers)
#2045654164104548352junit41 hours ago
2026-04-19T02:37:59Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T03:49:56Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:55250->192.168.111.5:6443: read: connection reset by peer
2026-04-19T03:50:01Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - write tcp 192.168.111.21:37344->192.168.111.5:6443: write: connection reset by peer
2026-04-19T03:50:02Z node/extraworker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-0?timeout=10s - read tcp 192.168.111.23:47884->192.168.111.5:6443: read: connection reset by peer
2026-04-19T03:50:04Z node/extraworker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-1?timeout=10s - context deadline exceeded
2026-04-19T03:50:04Z node/extraworker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-1?timeout=10s - read tcp 192.168.111.24:37474->192.168.111.5:6443: read: connection reset by peer
2026-04-19T03:50:06Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - context deadline exceeded
#2045654164104548352junit41 hours ago
2026-04-19T03:51:55Z node/extraworker-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/extraworker-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2026-04-19T03:59:49Z node/arbiter-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/arbiter-0?timeout=10s - read tcp 192.168.111.22:49732->192.168.111.5:6443: read: connection reset by peer
2026-04-19T03:59:49Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - write tcp 192.168.111.20:41520->192.168.111.5:6443: write: connection reset by peer
periodic-ci-openshift-release-main-ci-4.20-e2e-aws-ovn (all) - 6 runs, 17% failed, 100% of failures match = 17% impact
#2045679401697284096junit41 hours ago
2026-04-19T02:30:10Z node/ip-10-0-53-133.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-gqrn62gi-28061.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-53-133.us-west-2.compute.internal?timeout=10s - read tcp 10.0.53.133:60220->10.0.30.151:6443: read: connection reset by peer
2026-04-19T02:30:20Z node/ip-10-0-18-55.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-gqrn62gi-28061.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-18-55.us-west-2.compute.internal?timeout=10s - read tcp 10.0.18.55:39418->10.0.113.57:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-ovn-serial-1of2 (all) - 8 runs, 25% failed, 50% of failures match = 13% impact
#2045679533226463232junit41 hours ago
2026-04-19T02:31:29Z node/ip-10-0-36-101.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-qkpf52hz-2713f.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-101.ec2.internal?timeout=10s - read tcp 10.0.36.101:45220->10.0.9.69:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.19-upgrade-from-stable-4.18-e2e-aws-runc-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045654010601410560junit41 hours ago
2026-04-19T00:50:21Z node/ip-10-0-51-128.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxdm3rzi-9475c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-128.ec2.internal?timeout=10s - read tcp 10.0.51.128:42500->10.0.124.49:6443: read: connection reset by peer
2026-04-19T00:50:26Z node/ip-10-0-16-167.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxdm3rzi-9475c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-167.ec2.internal?timeout=10s - read tcp 10.0.16.167:34722->10.0.124.49:6443: read: connection reset by peer
2026-04-19T00:50:53Z node/ip-10-0-113-158.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxdm3rzi-9475c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-113-158.ec2.internal?timeout=10s - read tcp 10.0.113.158:38100->10.0.62.77:6443: read: connection reset by peer
2026-04-19T01:30:16Z node/ip-10-0-6-221.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxdm3rzi-9475c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-6-221.ec2.internal?timeout=10s - read tcp 10.0.6.221:59244->10.0.62.77:6443: read: connection reset by peer
2026-04-19T01:30:22Z node/ip-10-0-113-158.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxdm3rzi-9475c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-113-158.ec2.internal?timeout=10s - read tcp 10.0.113.158:40436->10.0.124.49:6443: read: connection reset by peer
2026-04-19T01:34:40Z node/ip-10-0-16-167.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxdm3rzi-9475c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-167.ec2.internal?timeout=10s - read tcp 10.0.16.167:60914->10.0.62.77:6443: read: connection reset by peer
2026-04-19T02:12:52Z node/ip-10-0-51-128.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxdm3rzi-9475c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-128.ec2.internal?timeout=10s - read tcp 10.0.51.128:34396->10.0.124.49:6443: read: connection reset by peer
2026-04-19T02:13:19Z node/ip-10-0-76-194.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-rxdm3rzi-9475c.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-194.ec2.internal?timeout=10s - read tcp 10.0.76.194:33570->10.0.62.77:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.18-upgrade-from-stable-4.17-e2e-metal-ipi-ovn-upgrade-cgroupsv1 (all) - 1 runs, 0% failed, 100% of runs match
#2045654050019479552junit42 hours ago
2026-04-19T02:28:52Z node/master-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-19T02:28:59Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:46240->192.168.111.5:6443: read: connection reset by peer
2026-04-19T02:28:59Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - write tcp 192.168.111.20:45480->192.168.111.5:6443: write: connection reset by peer
2026-04-19T02:29:00Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:57914->192.168.111.5:6443: read: connection reset by peer
2026-04-19T02:29:04Z node/worker-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-2?timeout=10s - read tcp 192.168.111.25:32992->192.168.111.5:6443: read: connection reset by peer
periodic-ci-openshift-ovn-kubernetes-release-4.19-e2e-aws-ovn-shared-to-local-gateway-mode-migration-periodic (all) - 1 runs, 0% failed, 100% of runs match
#2045669471791616000junit42 hours ago
2026-04-19T02:05:33Z node/ip-10-0-106-74.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-874zbrxp-0aa0c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-106-74.ec2.internal?timeout=10s - read tcp 10.0.106.74:58048->10.0.64.242:6443: read: connection reset by peer
2026-04-19T02:05:38Z node/ip-10-0-123-18.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-874zbrxp-0aa0c.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-123-18.ec2.internal?timeout=10s - read tcp 10.0.123.18:50932->10.0.64.242:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.18-upgrade-from-stable-4.17-e2e-aws-crun-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#2045653988770058240junit42 hours ago
2026-04-19T01:01:08Z node/ip-10-0-48-6.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-flhpijfp-1db8e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-6.us-west-2.compute.internal?timeout=10s - read tcp 10.0.48.6:44130->10.0.35.180:6443: read: connection reset by peer
2026-04-19T01:04:58Z node/ip-10-0-35-10.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-flhpijfp-1db8e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-35-10.us-west-2.compute.internal?timeout=10s - read tcp 10.0.35.10:43958->10.0.35.180:6443: read: connection reset by peer
2026-04-19T01:05:23Z node/ip-10-0-48-6.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-flhpijfp-1db8e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-6.us-west-2.compute.internal?timeout=10s - read tcp 10.0.48.6:56324->10.0.115.42:6443: read: connection reset by peer
2026-04-19T01:59:06Z node/ip-10-0-27-76.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-flhpijfp-1db8e.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-76.us-west-2.compute.internal?timeout=10s - read tcp 10.0.27.76:54522->10.0.35.180:6443: read: connection reset by peer
periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-aws-ovn-conformance-serial (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#2045668805677420544junit42 hours ago
2026-04-19T02:24:39Z node/ip-10-0-132-93.ec2.internal - reason/FailedToUpdateLease https://acd066a89ebd24fab9e6af6ad95043ea-57e4c87644904baa.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-132-93.ec2.internal?timeout=10s - read tcp 10.0.132.93:45062->100.51.18.243:6443: read: connection reset by peer
2026-04-19T02:24:48Z node/ip-10-0-129-111.ec2.internal - reason/FailedToUpdateLease https://acd066a89ebd24fab9e6af6ad95043ea-57e4c87644904baa.elb.us-east-1.amazonaws.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-129-111.ec2.internal?timeout=10s - read tcp 10.0.129.111:36460->52.23.13.118:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-5.0-upgrade-from-stable-4.22-e2e-vsphere-ovn-upgrade (all) - 8 runs, 13% failed, 100% of failures match = 13% impact
#2045668142679592960junit42 hours ago
2026-04-19T02:23:59Z node/ci-op-1l6qiz7i-fb57f-f6htm-master-0 - reason/FailedToUpdateLease https://api-int.ci-op-1l6qiz7i-fb57f.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-1l6qiz7i-fb57f-f6htm-master-0?timeout=10s - context deadline exceeded
2026-04-19T02:56:21Z node/ci-op-1l6qiz7i-fb57f-f6htm-master-1 - reason/FailedToUpdateLease https://api-int.ci-op-1l6qiz7i-fb57f.vmc-ci.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-op-1l6qiz7i-fb57f-f6htm-master-1?timeout=10s - read tcp 10.95.160.114:54292->10.95.160.16:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-e2e-aws-overlay-mtu-ovn-5000 (all) - 1 runs, 0% failed, 100% of runs match
#2045654065118973952junit42 hours ago
2026-04-19T00:49:12Z node/ip-10-0-120-136.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc9kb070-b61e9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-120-136.us-west-2.compute.internal?timeout=10s - read tcp 10.0.120.136:36340->10.0.8.166:6443: read: connection reset by peer
2026-04-19T00:49:13Z node/ip-10-0-3-23.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc9kb070-b61e9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-3-23.us-west-2.compute.internal?timeout=10s - read tcp 10.0.3.23:36420->10.0.93.158:6443: read: connection reset by peer
2026-04-19T00:49:19Z node/ip-10-0-76-105.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-kc9kb070-b61e9.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-76-105.us-west-2.compute.internal?timeout=10s - read tcp 10.0.76.105:53724->10.0.8.166:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-5.0-e2e-external-aws-ccm (all) - 1 runs, 0% failed, 100% of runs match
#2045654255548764160junit42 hours ago
2026-04-19T01:03:00Z node/ip-10-0-67-187 - reason/FailedToUpdateLease https://api-int.ci-op-v5x0qr4w-f4da2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-67-187?timeout=10s - read tcp 10.0.67.187:45906->10.0.57.121:6443: read: connection reset by peer
2026-04-19T01:03:08Z node/ip-10-0-49-88 - reason/FailedToUpdateLease https://api-int.ci-op-v5x0qr4w-f4da2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-88?timeout=10s - read tcp 10.0.49.88:48844->10.0.57.121:6443: read: connection reset by peer
2026-04-19T01:03:18Z node/ip-10-0-49-222 - reason/FailedToUpdateLease https://api-int.ci-op-v5x0qr4w-f4da2.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-222?timeout=10s - read tcp 10.0.49.222:39174->10.0.57.121:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-5.0-e2e-external-aws (all) - 1 runs, 0% failed, 100% of runs match
#2045654253078319104junit43 hours ago
2026-04-19T00:57:54Z node/ip-10-0-54-37 - reason/FailedToUpdateLease https://api-int.ci-op-hv5w1lwb-33554.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-54-37?timeout=10s - read tcp 10.0.54.37:56210->10.0.50.37:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.22-e2e-external-aws-ccm (all) - 1 runs, 0% failed, 100% of runs match
#2045654154029830144junit43 hours ago
2026-04-19T01:00:51Z node/ip-10-0-51-248 - reason/FailedToUpdateLease https://api-int.ci-op-vqsgbq6z-f51e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-248?timeout=10s - read tcp 10.0.51.248:58986->10.0.63.155:6443: read: connection reset by peer
2026-04-19T01:00:51Z node/ip-10-0-48-142 - reason/FailedToUpdateLease https://api-int.ci-op-vqsgbq6z-f51e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-142?timeout=10s - read tcp 10.0.48.142:41832->10.0.63.155:6443: read: connection reset by peer
2026-04-19T01:00:52Z node/ip-10-0-49-128 - reason/FailedToUpdateLease https://api-int.ci-op-vqsgbq6z-f51e2.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-128?timeout=10s - read tcp 10.0.49.128:53266->10.0.63.155:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.21-e2e-external-aws (all) - 1 runs, 0% failed, 100% of runs match
#2045654115484176384junit43 hours ago
2026-04-19T00:59:45Z node/ip-10-0-71-37 - reason/FailedToUpdateLease https://api-int.ci-op-gc8i7qzs-27f2f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-71-37?timeout=10s - read tcp 10.0.71.37:36464->10.0.59.19:6443: read: connection reset by peer
2026-04-19T00:59:58Z node/ip-10-0-56-203 - reason/FailedToUpdateLease https://api-int.ci-op-gc8i7qzs-27f2f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-56-203?timeout=10s - read tcp 10.0.56.203:57782->10.0.59.19:6443: read: connection reset by peer
2026-04-19T01:02:41Z node/ip-10-0-48-101 - reason/FailedToUpdateLease https://api-int.ci-op-gc8i7qzs-27f2f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-48-101?timeout=10s - read tcp 10.0.48.101:58630->10.0.59.19:6443: read: connection reset by peer
2026-04-19T01:02:41Z node/ip-10-0-56-203 - reason/FailedToUpdateLease https://api-int.ci-op-gc8i7qzs-27f2f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-56-203?timeout=10s - read tcp 10.0.56.203:44210->10.0.59.19:6443: read: connection reset by peer
2026-04-19T01:02:45Z node/ip-10-0-51-95 - reason/FailedToUpdateLease https://api-int.ci-op-gc8i7qzs-27f2f.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-51-95?timeout=10s - read tcp 10.0.51.95:57246->10.0.59.19:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-e2e-external-aws-ccm (all) - 1 runs, 0% failed, 100% of runs match
#2045654078536552448junit43 hours ago
2026-04-19T00:52:40Z node/ip-10-0-61-145 - reason/FailedToUpdateLease https://api-int.ci-op-9pci3dc0-1a627.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-145?timeout=10s - read tcp 10.0.61.145:60500->10.0.56.86:6443: read: connection reset by peer
2026-04-19T00:55:24Z node/ip-10-0-49-231 - reason/FailedToUpdateLease https://api-int.ci-op-9pci3dc0-1a627.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-231?timeout=10s - read tcp 10.0.49.231:50440->10.0.56.86:6443: read: connection reset by peer
2026-04-19T00:55:26Z node/ip-10-0-56-100 - reason/FailedToUpdateLease https://api-int.ci-op-9pci3dc0-1a627.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-56-100?timeout=10s - read tcp 10.0.56.100:58206->10.0.56.86:6443: read: connection reset by peer
2026-04-19T00:55:33Z node/ip-10-0-61-145 - reason/FailedToUpdateLease https://api-int.ci-op-9pci3dc0-1a627.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-145?timeout=10s - read tcp 10.0.61.145:50708->10.0.56.86:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.20-e2e-external-aws (all) - 1 runs, 0% failed, 100% of runs match
#2045654076309377024junit43 hours ago
2026-04-19T00:52:46Z node/ip-10-0-62-72 - reason/FailedToUpdateLease https://api-int.ci-op-n52pi55k-212b5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-62-72?timeout=10s - read tcp 10.0.62.72:56970->10.0.48.72:6443: read: connection reset by peer
2026-04-19T00:52:54Z node/ip-10-0-57-241 - reason/FailedToUpdateLease https://api-int.ci-op-n52pi55k-212b5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-57-241?timeout=10s - read tcp 10.0.57.241:46498->10.0.48.72:6443: read: connection reset by peer
2026-04-19T00:52:55Z node/ip-10-0-49-80 - reason/FailedToUpdateLease https://api-int.ci-op-n52pi55k-212b5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-49-80?timeout=10s - read tcp 10.0.49.80:59720->10.0.48.72:6443: read: connection reset by peer
2026-04-19T00:53:02Z node/ip-10-0-72-62 - reason/FailedToUpdateLease https://api-int.ci-op-n52pi55k-212b5.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-72-62?timeout=10s - read tcp 10.0.72.62:48600->10.0.48.72:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.17-upgrade-from-stable-4.16-e2e-aws-upgrade-ovn-single-node-network-flow-matrix (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#2045638611382046720junit43 hours ago
E0419 00:14:10.046958       1 errors.go:77] Post "https://api-int.ci-op-v1gt3cnc-630cc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.0.41.29:6443: connect: connection refused
E0419 00:14:10.517806       1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.ci-op-v1gt3cnc-630cc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/kube-scheduler?timeout=53.5s": dial tcp 10.0.81.135:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.20.237:57104->10.0.81.135:6443: read: connection reset by peer
E0419 00:14:29.176917       1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.ci-op-v1gt3cnc-630cc.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/kube-scheduler?timeout=53.5s": context deadline exceeded
periodic-ci-openshift-release-main-nightly-4.16-metal-ovn-network-flow-matrix-bm (all) - 2 runs, 0% failed, 50% of runs match
#2045638605518409728junit44 hours ago
2026-04-19T00:16:57Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:48038->192.168.111.5:6443: read: connection reset by peer
2026-04-19T00:17:00Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - write tcp 192.168.111.22:60634->192.168.111.5:6443: write: connection reset by peer
periodic-ci-openshift-release-main-ci-4.16-e2e-aws-ovn-upgrade (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2045634231446540288junit44 hours ago
2026-04-18T23:35:55Z node/ip-10-0-61-111.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-54wly7gh-9fcfb.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-111.ec2.internal?timeout=10s - read tcp 10.0.61.111:34466->10.0.116.80:6443: read: connection reset by peer
periodic-ci-openshift-release-main-ci-4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2045632829097775104junit44 hours ago
Apr 18 23:20:02.701 E ns/openshift-etcd pod/etcd-ip-10-0-202-212.us-east-2.compute.internal node/ip-10-0-202-212.us-east-2.compute.internal uid/d4d69328-9a15-4437-80d3-4f85ea8f0689 invariant violation: container statuses were removed
Apr 18 23:24:37.868 E clusteroperator/network condition/Degraded status/True reason/ApplyOperatorConfig changed: Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node: failed to apply / update (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node: Patch "https://api-int.ci-op-x2t3p58b-0667b.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.202.212:59296->10.0.206.77:6443: read: connection reset by peer
Apr 18 23:24:37.868 - 18s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node: failed to apply / update (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node: Patch "https://api-int.ci-op-x2t3p58b-0667b.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.202.212:59296->10.0.206.77:6443: read: connection reset by peer
Apr 18 23:30:38.403 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-5fd4569bbd-dvpp5 node/ip-10-0-202-212.us-east-2.compute.internal uid/82fced67-9a7f-4b01-bff3-cbaba71d2508 container/kube-controller-manager-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
#2045632829097775104junit44 hours ago
Apr 18 23:24:37.868 - 18s   E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node: failed to apply / update (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node: Patch "https://api-int.ci-op-x2t3p58b-0667b.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node?fieldManager=cluster-network-operator%2Foperconfig&force=true": read tcp 10.0.202.212:59296->10.0.206.77:6443: read: connection reset by peer
Apr 19 00:14:55.563 - 2s    E clusteroperator/network condition/Degraded status/True reason/Error while updating operator configuration: could not apply (/v1, Kind=Service) openshift-multus/network-metrics-service: failed to get current state of (/v1, Kind=Service) openshift-multus/network-metrics-service: client rate limiter Wait returned an error: context canceled
periodic-ci-openshift-multiarch-main-nightly-4.19-ocp-e2e-upgrade-aws-ovn-multi-a-a (all) - 1 runs, 0% failed, 100% of runs match
#2045618384049016832junit44 hours ago
2026-04-18T22:40:30Z node/ip-10-0-116-98.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-xnns7smy-ebd01.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-116-98.ec2.internal?timeout=10s - read tcp 10.0.116.98:45480->10.0.104.106:6443: read: connection reset by peer
2026-04-18T22:40:34Z node/ip-10-0-47-207.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-xnns7smy-ebd01.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-47-207.ec2.internal?timeout=10s - read tcp 10.0.47.207:38670->10.0.61.133:6443: read: connection reset by peer
2026-04-18T22:40:36Z node/ip-10-0-90-235.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-xnns7smy-ebd01.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-235.ec2.internal?timeout=10s - read tcp 10.0.90.235:45020->10.0.61.133:6443: read: connection reset by peer
2026-04-18T22:44:30Z node/ip-10-0-90-235.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-xnns7smy-ebd01.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-90-235.ec2.internal?timeout=10s - read tcp 10.0.90.235:35384->10.0.104.106:6443: read: connection reset by peer
2026-04-18T22:44:32Z node/ip-10-0-104-238.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-xnns7smy-ebd01.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-104-238.ec2.internal?timeout=10s - read tcp 10.0.104.238:57092->10.0.104.106:6443: read: connection reset by peer
2026-04-18T22:48:26Z node/ip-10-0-89-213.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-xnns7smy-ebd01.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-89-213.ec2.internal?timeout=10s - read tcp 10.0.89.213:34048->10.0.61.133:6443: read: connection reset by peer
2026-04-18T23:33:06Z node/ip-10-0-52-46.ec2.internal - reason/FailedToUpdateLease https://api-int.ci-op-xnns7smy-ebd01.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-52-46.ec2.internal?timeout=10s - read tcp 10.0.52.46:38670->10.0.61.133:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.19-ocp-e2e-ovn-serial-aws-multi-a-a (all) - 1 runs, 0% failed, 100% of runs match
#2045605297950560256junit45 hours ago
2026-04-18T22:54:29Z node/ip-10-0-27-233.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-klt38llc-fff3a.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-233.us-east-2.compute.internal?timeout=10s - read tcp 10.0.27.233:60206->10.0.32.25:6443: read: connection reset by peer
2026-04-18T23:02:19Z node/ip-10-0-27-233.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-klt38llc-fff3a.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-233.us-east-2.compute.internal?timeout=10s - read tcp 10.0.27.233:49312->10.0.32.25:6443: read: connection reset by peer
2026-04-18T23:07:08Z node/ip-10-0-119-165.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-klt38llc-fff3a.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-165.us-east-2.compute.internal?timeout=10s - read tcp 10.0.119.165:35106->10.0.112.177:6443: read: connection reset by peer
2026-04-18T23:07:10Z node/ip-10-0-64-109.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-klt38llc-fff3a.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-109.us-east-2.compute.internal?timeout=10s - read tcp 10.0.64.109:53328->10.0.32.25:6443: read: connection reset by peer
2026-04-18T23:11:02Z node/ip-10-0-43-241.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-klt38llc-fff3a.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-43-241.us-east-2.compute.internal?timeout=10s - read tcp 10.0.43.241:34776->10.0.112.177:6443: read: connection reset by peer
2026-04-18T23:11:11Z node/ip-10-0-27-233.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-klt38llc-fff3a.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-233.us-east-2.compute.internal?timeout=10s - read tcp 10.0.27.233:36738->10.0.32.25:6443: read: connection reset by peer
2026-04-18T23:11:11Z node/ip-10-0-27-233.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-klt38llc-fff3a.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-27-233.us-east-2.compute.internal?timeout=10s - read tcp 10.0.27.233:36796->10.0.32.25:6443: read: connection reset by peer
2026-04-18T23:11:13Z node/ip-10-0-16-184.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-klt38llc-fff3a.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-16-184.us-east-2.compute.internal?timeout=10s - read tcp 10.0.16.184:58068->10.0.32.25:6443: read: connection reset by peer
2026-04-18T23:34:24Z node/ip-10-0-64-109.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-klt38llc-fff3a.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-64-109.us-east-2.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-5.0-upgrade-from-stable-4.22-e2e-metal-ipi-ovn-upgrade (all) - 8 runs, 25% failed, 50% of failures match = 13% impact
#2045582090761670656junit45 hours ago
2026-04-18T21:56:32Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-18T23:36:57Z node/worker-1 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/worker-1?timeout=10s - read tcp 192.168.111.24:52672->192.168.111.5:6443: read: connection reset by peer
2026-04-18T23:37:00Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - read tcp 192.168.111.22:50754->192.168.111.5:6443: read: connection reset by peer
2026-04-18T23:37:07Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-multiarch-main-nightly-5.0-upgrade-from-stable-4.22-ocp-e2e-upgrade-gcp-ovn-multi-a-a (all) - 8 runs, 63% failed, 20% of failures match = 13% impact
#2045587514252595200junit46 hours ago
# [sig-storage] CSI Mock volume fsgroup policies CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default
fail [k8s.io/kubernetes/test/e2e/storage/drivers/csi.go:770]: deploying csi mock driver: create CSIDriver: Post "https://api.ci-op-qz4qpn6q-54ead.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/storage.k8s.io/v1/csidrivers": read tcp 10.130.43.121:36482->34.8.213.171:6443: read: connection reset by peer
periodic-ci-shiftstack-ci-release-4.16-e2e-openstack-csi-cinder (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045600780223778816junit46 hours ago
2026-04-18T21:49:52Z node/y692zqvj-38d38-f4j7r-master-0 - reason/FailedToUpdateLease https://api-int.y692zqvj-38d38.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/y692zqvj-38d38-f4j7r-master-0?timeout=10s - http2: client connection lost
2026-04-18T22:47:46Z node/y692zqvj-38d38-f4j7r-worker-0-227cg - reason/FailedToUpdateLease https://api-int.y692zqvj-38d38.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/y692zqvj-38d38-f4j7r-worker-0-227cg?timeout=10s - read tcp 10.0.1.131:41782->10.0.0.5:6443: read: connection reset by peer
2026-04-18T22:47:58Z node/y692zqvj-38d38-f4j7r-master-1 - reason/FailedToUpdateLease https://api-int.y692zqvj-38d38.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/y692zqvj-38d38-f4j7r-master-1?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
periodic-ci-openshift-release-main-nightly-4.21-upgrade-from-stable-4.20-e2e-metal-ipi-ovn-upgrade-network-flow-matrix (all) - 2 runs, 0% failed, 50% of runs match
#2045578212934684672junit46 hours ago
2026-04-18T21:38:37Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host
2026-04-18T21:46:17Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:40284->192.168.111.5:6443: read: connection reset by peer
2026-04-18T21:46:17Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - dial tcp 192.168.111.5:6443: connect: connection refused
periodic-ci-shiftstack-ci-release-4.18-e2e-openstack-proxy (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045600782073466880junit46 hours ago
namespace/openshift-route-controller-manager pod/route-controller-manager-sa-dockercfg-dwl6f container/ - node/6ypgf211-cdbd9-7xxlz-master-2 reason/HttpClientConnectionLost unable to decode an event from the watch stream: http2: client connection lost
namespace/openshift-console pod/downloads-94df4d4b4-ls2nn uid/5c23b569-6c73-4c3a-835f-a838b42a0a4a container/ - node/6ypgf211-cdbd9-7xxlz-master-2 reason/HttpClientConnectionLost Get "https://api-int.6ypgf211-cdbd9.shiftstack.devcluster.openshift.com:6443/api/v1/namespaces/openshift-console/pods/downloads-94df4d4b4-ls2nn": http2: client connection lost - error from a previous attempt: read tcp 172.16.0.19:43760->172.16.0.5:6443: read: connection reset by peer
 - node/6ypgf211-cdbd9-7xxlz-master-2 reason/HttpClientConnectionLost unable to decode an event from the watch stream: http2: client connection lost
periodic-ci-openshift-release-main-nightly-4.16-e2e-aws-ovn-serial (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2045580909394006016junit47 hours ago
2026-04-18T21:09:23Z node/ip-10-0-61-132.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ttfb655r-771ed.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-132.us-west-1.compute.internal?timeout=10s - read tcp 10.0.61.132:38860->10.0.43.27:6443: read: connection reset by peer
2026-04-18T21:13:13Z node/ip-10-0-109-73.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ttfb655r-771ed.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-109-73.us-west-1.compute.internal?timeout=10s - read tcp 10.0.109.73:33748->10.0.43.27:6443: read: connection reset by peer
2026-04-18T21:25:53Z node/ip-10-0-33-222.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-ttfb655r-771ed.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-33-222.us-west-1.compute.internal?timeout=10s - read tcp 10.0.33.222:51078->10.0.43.27:6443: read: connection reset by peer
periodic-ci-openshift-release-main-nightly-4.16-e2e-aws-sdn-upgrade (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#2045580911700873216junit47 hours ago
2026-04-18T20:48:25Z node/ip-10-0-38-227.us-west-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-066g3mcw-ba33e.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-38-227.us-west-2.compute.internal?timeout=10s - read tcp 10.0.38.227:44204->10.0.94.254:6443: read: connection reset by peer
periodic-ci-shiftstack-ci-release-4.17-e2e-openstack-ccpmso (all) - 1 runs, 0% failed, 100% of runs match
#2045568305195913216junit47 hours ago
2026-04-18T19:55:55Z node/0vdzd079-d2b75-zx4mt-worker-0-sbkhx - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-worker-0-sbkhx?timeout=10s - read tcp 10.0.2.211:55444->10.0.0.5:6443: read: connection reset by peer
2026-04-18T19:55:56Z node/0vdzd079-d2b75-zx4mt-worker-0-96txt - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-worker-0-96txt?timeout=10s - read tcp 10.0.1.245:56582->10.0.0.5:6443: read: connection reset by peer
2026-04-18T19:55:57Z node/0vdzd079-d2b75-zx4mt-master-crmkz-0 - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-master-crmkz-0?timeout=10s - read tcp 10.0.0.253:60890->10.0.0.5:6443: read: connection reset by peer
2026-04-18T19:56:05Z node/0vdzd079-d2b75-zx4mt-worker-0-sbkhx - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-worker-0-sbkhx?timeout=10s - read tcp 10.0.2.211:43950->10.0.0.5:6443: read: connection reset by peer
2026-04-18T19:56:07Z node/0vdzd079-d2b75-zx4mt-worker-0-96txt - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-worker-0-96txt?timeout=10s - read tcp 10.0.1.245:58124->10.0.0.5:6443: read: connection reset by peer
2026-04-18T20:16:36Z node/0vdzd079-d2b75-zx4mt-worker-0-pqvd9 - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-worker-0-pqvd9?timeout=10s - read tcp 10.0.2.220:55848->10.0.0.5:6443: read: connection reset by peer
2026-04-18T20:16:36Z node/0vdzd079-d2b75-zx4mt-master-8c9pw-1 - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-master-8c9pw-1?timeout=10s - read tcp 10.0.1.34:42812->10.0.0.5:6443: read: connection reset by peer
2026-04-18T20:16:41Z node/0vdzd079-d2b75-zx4mt-master-crmkz-0 - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-master-crmkz-0?timeout=10s - read tcp 10.0.0.253:51222->10.0.0.5:6443: read: connection reset by peer
2026-04-18T20:16:42Z node/0vdzd079-d2b75-zx4mt-worker-0-96txt - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-worker-0-96txt?timeout=10s - read tcp 10.0.1.245:55650->10.0.0.5:6443: read: connection reset by peer
2026-04-18T20:16:47Z node/0vdzd079-d2b75-zx4mt-master-8c9pw-1 - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-master-8c9pw-1?timeout=10s - read tcp 10.0.1.34:45902->10.0.0.5:6443: read: connection reset by peer
2026-04-18T20:37:02Z node/0vdzd079-d2b75-zx4mt-master-8c9pw-1 - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-master-8c9pw-1?timeout=10s - read tcp 10.0.1.34:36474->10.0.0.5:6443: read: connection reset by peer
2026-04-18T20:37:02Z node/0vdzd079-d2b75-zx4mt-master-cvb8t-2 - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-master-cvb8t-2?timeout=10s - read tcp 10.0.0.130:49732->10.0.0.5:6443: read: connection reset by peer
2026-04-18T20:37:03Z node/0vdzd079-d2b75-zx4mt-master-8c9pw-1 - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-master-8c9pw-1?timeout=10s - read tcp 10.0.1.34:36602->10.0.0.5:6443: read: connection reset by peer
2026-04-18T20:37:03Z node/0vdzd079-d2b75-zx4mt-worker-0-pqvd9 - reason/FailedToUpdateLease https://api-int.0vdzd079-d2b75.shiftstack.devcluster.openshift.com:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/0vdzd079-d2b75-zx4mt-worker-0-pqvd9?timeout=10s - read tcp 10.0.2.220:41236->10.0.0.5:6443: read: connection reset by peer

... 5 lines not shown

periodic-ci-openshift-ovn-kubernetes-release-4.19-periodics-e2e-aws-kubevirt-ovn-techpreview (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045592211948572672junit47 hours ago
2026-04-18T20:50:08Z node/ip-10-0-61-91.us-east-2.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-9rkn3c9b-53133.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-61-91.us-east-2.compute.internal?timeout=10s - read tcp 10.0.61.91:52392->10.0.41.178:6443: read: connection reset by peer
periodic-ci-openshift-multiarch-main-nightly-4.17-ocp-e2e-serial-aws-ovn-multi-x-ax (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#2045565032850264064junit2 days ago
2026-04-18T20:22:45Z node/ip-10-0-32-61.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-m6xx2jck-7c960.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-32-61.us-west-1.compute.internal?timeout=10s - read tcp 10.0.32.61:47388->10.0.110.252:6443: read: connection reset by peer
2026-04-18T20:31:39Z node/ip-10-0-119-118.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-m6xx2jck-7c960.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-119-118.us-west-1.compute.internal?timeout=10s - read tcp 10.0.119.118:49086->10.0.110.252:6443: read: connection reset by peer
2026-04-18T20:35:28Z node/ip-10-0-121-73.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-m6xx2jck-7c960.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-121-73.us-west-1.compute.internal?timeout=10s - read tcp 10.0.121.73:51660->10.0.110.252:6443: read: connection reset by peer
2026-04-18T20:53:19Z node/ip-10-0-36-58.us-west-1.compute.internal - reason/FailedToUpdateLease https://api-int.ci-op-m6xx2jck-7c960.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-10-0-36-58.us-west-1.compute.internal?timeout=10s - net/http: request canceled (Client.Timeout exceeded while awaiting headers)
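The event lines above all share one shape — timestamp, node, reason/FailedToUpdateLease, lease URL, then the underlying network error — so they can be bucketed mechanically when triaging a dump like this. Below is a minimal, illustrative sketch (the `classify` helper and its category names are my own, not part of any tool) that parses such lines and tallies failures by error type:

```python
import re
from collections import Counter

# Hypothetical helper: map the trailing error text of a
# FailedToUpdateLease event line to a coarse failure category.
def classify(error: str) -> str:
    if "connection reset by peer" in error:
        return "conn-reset"
    if "Client.Timeout exceeded" in error:
        return "client-timeout"
    if "no such host" in error:
        return "dns"
    if "connection refused" in error:
        return "conn-refused"
    return "other"

# Event lines have the shape:
#   <timestamp> node/<name> - reason/FailedToUpdateLease <url> - <error>
LINE_RE = re.compile(
    r"^(?P<ts>\S+) node/(?P<node>\S+) - reason/FailedToUpdateLease "
    r"(?P<url>\S+) - (?P<error>.*)$"
)

def tally(lines):
    """Count FailedToUpdateLease events per error category."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            counts[classify(m.group("error"))] += 1
    return counts

# Sample lines taken verbatim from the dump above.
sample = [
    "2026-04-18T21:38:37Z node/master-2 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s - dial tcp: lookup api-int.ostest.test.metalkube.org on 192.168.111.1:53: no such host",
    "2026-04-18T21:46:17Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - read tcp 192.168.111.20:40284->192.168.111.5:6443: read: connection reset by peer",
    "2026-04-18T21:46:17Z node/master-0 - reason/FailedToUpdateLease https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s - dial tcp 192.168.111.5:6443: connect: connection refused",
]

print(tally(sample))
```

Run against the full dump, the dominant bucket (here, `conn-reset` against the internal API VIP on port 6443) points at where the kubelets lost the apiserver, which is the expected symptom during control-plane restarts in upgrade and migration jobs.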

Found in 1.59% of runs (6.22% of failures) across 37689 total runs and 8413 jobs (25.49% failed) in 1.048s