dataproc.snippets.update_cluster_test: test_update_cluster failed #9445
Comments
Looks like this issue is flaky. 😟 I'm going to leave this open and stop commenting. A human should fix and close this. When run at the same commit (a0b5ecf), this test passed in one build (Build Status, Sponge) and failed in another build (Build Status, Sponge).
Oops! Looks like this issue is still flaky. It failed again. 😬 I reopened the issue, but a human will need to close it again.
commit: 161f9cb
Test output:
Traceback (most recent call last):
  File "/workspace/dataproc/snippets/update_cluster_test.py", line 59, in setup_teardown
    operation.result()
  File "/workspace/dataproc/snippets/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/future/polling.py", line 261, in result
    raise self._exception
google.api_core.exceptions.InvalidArgument: 400 The resource 'projects/python-docs-samples-tests/regions/us-central1/subnetworks/default' is not ready
Oops! Looks like this issue is still flaky. It failed again. 😬 I reopened the issue, but a human will need to close it again.
commit: 0bf03c7
Test output:
Traceback (most recent call last):
  File "/workspace/dataproc/snippets/update_cluster_test.py", line 85, in test_update_cluster
    update_cluster.update_cluster(PROJECT_ID, REGION, CLUSTER_NAME, NEW_NUM_INSTANCES)
  File "/workspace/dataproc/snippets/update_cluster.py", line 65, in update_cluster
    updated_cluster = operation.result()
  File "/workspace/dataproc/snippets/.nox/py-3-7/lib/python3.7/site-packages/google/api_core/future/polling.py", line 261, in result
    raise self._exception
google.api_core.exceptions.Cancelled: 499 Operation partially failed:
  - 3 of 3 instances could not be added to cluster, they were deleted
  - The resource 'projects/python-docs-samples-tests/regions/us-central1/subnetworks/default' is not ready
  - The resource 'projects/python-docs-samples-tests/regions/us-central1/subnetworks/default' is not ready
  - The resource 'projects/python-docs-samples-tests/regions/us-central1/subnetworks/default' is not ready
Ooo, fun, a new-to-me error. I'll take a look.
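For context, both reports above fail while the test fixture is still creating the cluster: operation.result() surfaces a 400 InvalidArgument or a 499 Cancelled because the default subnetwork in us-central1 is reported as not ready. A minimal sketch of one possible mitigation — retrying the creation on that transient error — assuming the google-cloud-dataproc client; PROJECT_ID, REGION, CLUSTER, create_cluster_with_retry, and the retry parameters are illustrative, not the repository's actual fixture:

```python
# Illustrative sketch only (not the repository's fixture): retry Dataproc
# cluster creation when the default subnetwork is transiently "not ready".
import time

from google.api_core import exceptions as gcp_exceptions
from google.cloud import dataproc_v1

PROJECT_ID = "your-project-id"   # placeholder
REGION = "us-central1"           # placeholder
CLUSTER = {                      # placeholder cluster definition
    "project_id": PROJECT_ID,
    "cluster_name": "example-cluster",
    "config": {},
}


def create_cluster_with_retry(max_attempts: int = 3, delay_s: float = 30.0):
    """Create the cluster, retrying when the subnetwork is not ready yet."""
    client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"}
    )
    for attempt in range(1, max_attempts + 1):
        operation = client.create_cluster(
            request={"project_id": PROJECT_ID, "region": REGION, "cluster": CLUSTER}
        )
        try:
            return operation.result()  # blocks until the long-running operation finishes
        except (gcp_exceptions.InvalidArgument, gcp_exceptions.Cancelled) as exc:
            # The flaky runs fail here with "subnetworks/default' is not ready",
            # either as a 400 during setup or a 499 when instances fail to join.
            if attempt == max_attempts or "is not ready" not in str(exc):
                raise
            # A partially failed creation can leave an ERROR-state cluster behind,
            # so remove it before retrying under the same name.
            try:
                client.delete_cluster(
                    request={
                        "project_id": PROJECT_ID,
                        "region": REGION,
                        "cluster_name": CLUSTER["cluster_name"],
                    }
                ).result()
            except gcp_exceptions.NotFound:
                pass
            time.sleep(delay_s)
```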
Closing this to see if #9666 helped.
Oops! Looks like this issue is still flaky. It failed again. 😬 I reopened the issue, but a human will need to close it again.
commit: b7d8eb4
Test output:
Traceback (most recent call last):
  File "/workspace/dataproc/snippets/update_cluster_test.py", line 94, in test_update_cluster
    update_cluster.update_cluster(PROJECT_ID, REGION, CLUSTER_NAME, NEW_NUM_INSTANCES)
  File "/workspace/dataproc/snippets/update_cluster.py", line 65, in update_cluster
    updated_cluster = operation.result()
  File "/workspace/dataproc/snippets/.nox/py-3-7/lib/python3.7/site-packages/google/api_core/future/polling.py", line 261, in result
    raise self._exception
google.api_core.exceptions.Cancelled: 499 Operation partially failed:
  - 3 of 3 instances could not be added to cluster, they were deleted
  - The resource 'projects/python-docs-samples-tests/regions/us-central1/subnetworks/default' is not ready
  - The resource 'projects/python-docs-samples-tests/regions/us-central1/subnetworks/default' is not ready
  - The resource 'projects/python-docs-samples-tests/regions/us-central1/subnetworks/default' is not ready
I think that #9764 should have fixed this.
Oops! Looks like this issue is still flaky. It failed again. 😬 I reopened the issue, but a human will need to close it again.
commit: 6681669
Test output:
Traceback (most recent call last):
  File "/workspace/dataproc/snippets/update_cluster_test.py", line 94, in test_update_cluster
    update_cluster.update_cluster(PROJECT_ID, REGION, CLUSTER_NAME, NEW_NUM_INSTANCES)
  File "/workspace/dataproc/snippets/update_cluster.py", line 65, in update_cluster
    updated_cluster = operation.result()
  File "/workspace/dataproc/snippets/.nox/py-3-7/lib/python3.7/site-packages/google/api_core/future/polling.py", line 261, in result
    raise self._exception
google.api_core.exceptions.Cancelled: 499 Operation partially failed:
  - 3 of 3 instances could not be added to cluster, they were deleted
  - The resource 'projects/python-docs-samples-tests/regions/us-central1/subnetworks/default' is not ready
  - The resource 'projects/python-docs-samples-tests/regions/us-central1/subnetworks/default' is not ready
  - The resource 'projects/python-docs-samples-tests/regions/us-central1/subnetworks/default' is not ready
@nicain checking in on this - can you remind me why we ended up reverting to
Oops! Looks like this issue is still flaky. It failed again. 😬 I reopened the issue, but a human will need to close it again.
commit: e419fcf
Test output:
Traceback (most recent call last):
  File "/workspace/dataproc/snippets/.nox/py-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py", line 72, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/workspace/dataproc/snippets/.nox/py-3-10/lib/python3.10/site-packages/grpc/_channel.py", line 1030, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/workspace/dataproc/snippets/.nox/py-3-10/lib/python3.10/site-packages/grpc/_channel.py", line 910, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.ALREADY_EXISTS
    details = "Already exists: Failed to create cluster: Cluster projects/python-docs-samples-tests/regions/us-central1/clusters/py-cc-test-6e496aa1-bed0-44a7-a1b0-f72b87650224"
    debug_error_string = "UNKNOWN:Error received from peer ipv4:209.85.146.95:443 {grpc_message:"Already exists: Failed to create cluster: Cluster projects/python-docs-samples-tests/regions/us-central1/clusters/py-cc-test-6e496aa1-bed0-44a7-a1b0-f72b87650224", grpc_status:6, created_time:"2023-04-30T13:06:58.306931859+00:00"}"
>
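Editorial aside: the ALREADY_EXISTS failure above looks like a retried creation reusing a cluster name that an earlier failed attempt left behind. An alternative to deleting and recreating under the same name is to make the name unique per attempt (the test already appends a UUID per run, judging by the py-cc-test-<uuid> names in the logs); unique_cluster_name and the prefix below are purely illustrative:

```python
# Illustrative only: a per-attempt unique cluster name avoids ALREADY_EXISTS
# collisions with a cluster left behind by a previous failed attempt.
import uuid


def unique_cluster_name(prefix: str = "py-cc-test") -> str:
    # Dataproc cluster names are limited to lowercase letters, digits, and hyphens.
    return f"{prefix}-{uuid.uuid4()}"
```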
Oops! Looks like this issue is still flaky. It failed again. 😬 I reopened the issue, but a human will need to close it again.
commit: 6286564
Test output:
Traceback (most recent call last):
  File "/workspace/dataproc/snippets/.nox/py-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py", line 72, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/workspace/dataproc/snippets/.nox/py-3-10/lib/python3.10/site-packages/grpc/_channel.py", line 1030, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/workspace/dataproc/snippets/.nox/py-3-10/lib/python3.10/site-packages/grpc/_channel.py", line 910, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.INVALID_ARGUMENT
    details = "Cluster 'py-cc-test-b425dd62-cbb0-4499-963e-65ecd16b377d' must be running before it can be updated, current cluster state is 'ERROR'."
    debug_error_string = "UNKNOWN:Error received from peer ipv4:173.194.196.95:443 {grpc_message:"Cluster \'py-cc-test-b425dd62-cbb0-4499-963e-65ecd16b377d\' must be running before it can be updated, current cluster state is \'ERROR\'.", grpc_status:3, created_time:"2023-05-02T13:03:19.77002831+00:00"}"
>
Oops! Looks like this issue is still flaky. It failed again. 😬 I reopened the issue, but a human will need to close it again. commit: 43304bc Test outputTraceback (most recent call last): File "/workspace/dataproc/snippets/update_cluster_test.py", line 98, in test_update_cluster assert response.status.state == ClusterStatus.State.RUNNING AssertionError: assert == + where = state: ERROR\nstate_start_time {\n seconds: 1683576210\n nanos: 18671000\n}\n.state + where state: ERROR\nstate_start_time {\n seconds: 1683576210\n nanos: 18671000\n}\n = project_id: "python-docs-samples-tests"\ncluster_name: "py-cc-test-2cc16589-2aa6-4ec7-b82a-6584a7819fb7"\nconfig {\n config_bucket: "dataproc-28f7d075-c5ac-4938-a5ad-65eafd8032d3-us-central1"\n temp_bucket: "dataproc-temp-us-central1-1012616486416-3kbzlm1c"\n gce_cluster_config {\n zone_uri: "https://www.googleapis.com/compute/v1/projects/python-docs-samples-tests/zones/us-central1-f"\n network_uri: "https://www.googleapis.com/compute/v1/projects/python-docs-samples-tests/global/networks/default"\n service_account_scopes: "https://www.googleapis.com/auth/bigquery"\n service_account_scopes: "https://www.googleapis.com/auth/bigtable.admin.table"\n service_account_scopes: "https://www.googleapis.com/auth/bigtable.data"\n service_account_scopes: "https://www.googleapis.com/auth/cloud.useraccounts.readonly"\n service_account_scopes: "https://www.googleapis.com/auth/devstorage.full_control"\n service_account_scopes: "https://www.googleapis.com/auth/devstorage.read_write"\n service_account_scopes: "https://www.googleapis.com/auth/logging.write"\n }\n master_config {\n num_instances: 1\n instance_names: "py-cc-test-2cc16589-2aa6-4ec7-b82a-6584a7819fb7-m"\n image_uri... properties {\n key: "distcp:mapreduce.map.memory.mb"\n value: "768"\n }\n properties {\n key: "distcp:mapreduce.map.java.opts"\n value: "-Xmx576m"\n }\n properties {\n key: "core:hadoop.ssl.enabled.protocols"\n value: "TLSv1,TLSv1.1,TLSv1.2"\n }\n properties {\n key: "core:fs.gs.metadata.cache.enable"\n value: "false"\n }\n properties {\n key: "core:fs.gs.block.size"\n value: "134217728"\n }\n properties {\n key: "capacity-scheduler:yarn.scheduler.capacity.root.default.ordering-policy"\n value: "fair"\n }\n }\n endpoint_config {\n }\n}\nlabels {\n key: "goog-dataproc-location"\n value: "us-central1"\n}\nlabels {\n key: "goog-dataproc-cluster-uuid"\n value: "25dc4ca2-fdb6-4340-943d-2a427c996286"\n}\nlabels {\n key: "goog-dataproc-cluster-name"\n value: "py-cc-test-2cc16589-2aa6-4ec7-b82a-6584a7819fb7"\n}\nlabels {\n key: "goog-dataproc-autozone"\n value: "enabled"\n}\nstatus {\n state: ERROR\n state_start_time {\n seconds: 1683576210\n nanos: 18671000\n }\n}\nstatus_history {\n state: CREATING\n state_start_time {\n seconds: 1683576207\n nanos: 351596000\n }\n}\ncluster_uuid: "25dc4ca2-fdb6-4340-943d-2a427c996286"\n.status + and = .RUNNING + where = ClusterStatus.State |
WAI (working as intended): the test failed because the cluster came up in the ERROR state.
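Both this error and the earlier "must be running before it can be updated" failure point at the cluster finishing creation in the ERROR state. A minimal guard sketch, assuming the same google-cloud-dataproc client the snippets use; assert_cluster_running and its parameters are illustrative, not the test's actual code:

```python
# Illustrative only: surface a clear failure when the new cluster is not RUNNING,
# instead of letting the update call (or the final assert) fail with a raw dump.
from google.cloud import dataproc_v1
from google.cloud.dataproc_v1 import ClusterStatus


def assert_cluster_running(project_id: str, region: str, cluster_name: str) -> None:
    client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )
    cluster = client.get_cluster(
        request={"project_id": project_id, "region": region, "cluster_name": cluster_name}
    )
    state = cluster.status.state
    if state != ClusterStatus.State.RUNNING:
        history = [s.state.name for s in cluster.status_history]
        raise AssertionError(
            f"Cluster {cluster_name} is in state {state.name}, expected RUNNING; "
            f"status history: {history}"
        )
```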
Oops! Looks like this issue is still flaky. It failed again. 😬 I reopened the issue, but a human will need to close it again. commit: 0b7616f Test outputTraceback (most recent call last): File "/workspace/dataproc/snippets/update_cluster_test.py", line 98, in test_update_cluster assert response.status.state == ClusterStatus.State.RUNNING AssertionError: assert == + where = state: ERROR\nstate_start_time {\n seconds: 1683801581\n nanos: 931735000\n}\n.state + where state: ERROR\nstate_start_time {\n seconds: 1683801581\n nanos: 931735000\n}\n = project_id: "python-docs-samples-tests"\ncluster_name: "py-cc-test-a77243ee-9536-4ae6-93cf-8042d6d69a0c"\nconfig {\n config_bucket: "dataproc-28f7d075-c5ac-4938-a5ad-65eafd8032d3-us-central1"\n temp_bucket: "dataproc-temp-us-central1-1012616486416-3kbzlm1c"\n gce_cluster_config {\n zone_uri: "https://www.googleapis.com/compute/v1/projects/python-docs-samples-tests/zones/us-central1-f"\n network_uri: "https://www.googleapis.com/compute/v1/projects/python-docs-samples-tests/global/networks/default"\n service_account_scopes: "https://www.googleapis.com/auth/bigquery"\n service_account_scopes: "https://www.googleapis.com/auth/bigtable.admin.table"\n service_account_scopes: "https://www.googleapis.com/auth/bigtable.data"\n service_account_scopes: "https://www.googleapis.com/auth/cloud.useraccounts.readonly"\n service_account_scopes: "https://www.googleapis.com/auth/devstorage.full_control"\n service_account_scopes: "https://www.googleapis.com/auth/devstorage.read_write"\n service_account_scopes: "https://www.googleapis.com/auth/logging.write"\n }\n master_config {\n num_instances: 1\n instance_names: "py-cc-test-a77243ee-9536-4ae6-93cf-8042d6d69a0c-m"\n image_uri... properties {\n key: "distcp:mapreduce.map.memory.mb"\n value: "768"\n }\n properties {\n key: "distcp:mapreduce.map.java.opts"\n value: "-Xmx576m"\n }\n properties {\n key: "core:hadoop.ssl.enabled.protocols"\n value: "TLSv1,TLSv1.1,TLSv1.2"\n }\n properties {\n key: "core:fs.gs.metadata.cache.enable"\n value: "false"\n }\n properties {\n key: "core:fs.gs.block.size"\n value: "134217728"\n }\n properties {\n key: "capacity-scheduler:yarn.scheduler.capacity.root.default.ordering-policy"\n value: "fair"\n }\n }\n endpoint_config {\n }\n}\nlabels {\n key: "goog-dataproc-location"\n value: "us-central1"\n}\nlabels {\n key: "goog-dataproc-cluster-uuid"\n value: "6393dbd8-6c99-47e9-af75-9cef70116b42"\n}\nlabels {\n key: "goog-dataproc-cluster-name"\n value: "py-cc-test-a77243ee-9536-4ae6-93cf-8042d6d69a0c"\n}\nlabels {\n key: "goog-dataproc-autozone"\n value: "enabled"\n}\nstatus {\n state: ERROR\n state_start_time {\n seconds: 1683801581\n nanos: 931735000\n }\n}\nstatus_history {\n state: CREATING\n state_start_time {\n seconds: 1683801579\n nanos: 24856000\n }\n}\ncluster_uuid: "6393dbd8-6c99-47e9-af75-9cef70116b42"\n.status + and = .RUNNING + where = ClusterStatus.State |
Bumping to P2 because it's not a service issue.
Oops! Looks like this issue is still flaky. It failed again. 😬 I reopened the issue, but a human will need to close it again. commit: 360972a Test outputTraceback (most recent call last): File "/workspace/dataproc/snippets/update_cluster_test.py", line 98, in test_update_cluster assert response.status.state == ClusterStatus.State.RUNNING AssertionError: assert == + where = state: ERROR\nstate_start_time {\n seconds: 1685537022\n nanos: 392280000\n}\n.state + where state: ERROR\nstate_start_time {\n seconds: 1685537022\n nanos: 392280000\n}\n = project_id: "python-docs-samples-tests"\ncluster_name: "py-cc-test-67685216-d8b4-44d9-90a2-d3e78e6f98a3"\nconfig {\n config_bucket: "dataproc-28f7d075-c5ac-4938-a5ad-65eafd8032d3-us-central1"\n temp_bucket: "dataproc-temp-us-central1-1012616486416-3kbzlm1c"\n gce_cluster_config {\n zone_uri: "https://www.googleapis.com/compute/v1/projects/python-docs-samples-tests/zones/us-central1-f"\n network_uri: "https://www.googleapis.com/compute/v1/projects/python-docs-samples-tests/global/networks/default"\n service_account_scopes: "https://www.googleapis.com/auth/bigquery"\n service_account_scopes: "https://www.googleapis.com/auth/bigtable.admin.table"\n service_account_scopes: "https://www.googleapis.com/auth/bigtable.data"\n service_account_scopes: "https://www.googleapis.com/auth/cloud.useraccounts.readonly"\n service_account_scopes: "https://www.googleapis.com/auth/devstorage.full_control"\n service_account_scopes: "https://www.googleapis.com/auth/devstorage.read_write"\n service_account_scopes: "https://www.googleapis.com/auth/logging.write"\n }\n master_config {\n num_instances: 1\n instance_names: "py-cc-test-67685216-d8b4-44d9-90a2-d3e78e6f98a3-m"\n image_uri... properties {\n key: "distcp:mapreduce.map.memory.mb"\n value: "768"\n }\n properties {\n key: "distcp:mapreduce.map.java.opts"\n value: "-Xmx576m"\n }\n properties {\n key: "core:hadoop.ssl.enabled.protocols"\n value: "TLSv1,TLSv1.1,TLSv1.2"\n }\n properties {\n key: "core:fs.gs.metadata.cache.enable"\n value: "false"\n }\n properties {\n key: "core:fs.gs.block.size"\n value: "134217728"\n }\n properties {\n key: "capacity-scheduler:yarn.scheduler.capacity.root.default.ordering-policy"\n value: "fair"\n }\n }\n endpoint_config {\n }\n}\nlabels {\n key: "goog-dataproc-location"\n value: "us-central1"\n}\nlabels {\n key: "goog-dataproc-cluster-uuid"\n value: "e3de8c40-dbf7-4cd2-b787-ec774181f8f7"\n}\nlabels {\n key: "goog-dataproc-cluster-name"\n value: "py-cc-test-67685216-d8b4-44d9-90a2-d3e78e6f98a3"\n}\nlabels {\n key: "goog-dataproc-autozone"\n value: "enabled"\n}\nstatus {\n state: ERROR\n state_start_time {\n seconds: 1685537022\n nanos: 392280000\n }\n}\nstatus_history {\n state: CREATING\n state_start_time {\n seconds: 1685537020\n nanos: 77109000\n }\n}\ncluster_uuid: "e3de8c40-dbf7-4cd2-b787-ec774181f8f7"\n.status + and = .RUNNING + where = ClusterStatus.State |
Closing to see if #10527 helped.
Oops! Looks like this issue is still flaky. It failed again. 😬 I reopened the issue, but a human will need to close it again. commit: e440097 Test outputTraceback (most recent call last): File "/workspace/dataproc/snippets/update_cluster_test.py", line 107, in test_update_cluster assert response.status.state == ClusterStatus.State.RUNNING AssertionError: assert == + where = state: ERROR\nstate_start_time {\n seconds: 1692968912\n nanos: 827217000\n}\n.state + where state: ERROR\nstate_start_time {\n seconds: 1692968912\n nanos: 827217000\n}\n = project_id: "python-docs-samples-tests"\ncluster_name: "py-cc-test-ed5888c3-0ebd-4a53-bdd7-2edc7e05a7df"\nconfig {\n config_bucket: "dataproc-28f7d075-c5ac-4938-a5ad-65eafd8032d3-us-central1"\n temp_bucket: "dataproc-temp-us-central1-1012616486416-3kbzlm1c"\n gce_cluster_config {\n zone_uri: "https://www.googleapis.com/compute/v1/projects/python-docs-samples-tests/zones/us-central1-f"\n network_uri: "https://www.googleapis.com/compute/v1/projects/python-docs-samples-tests/global/networks/default"\n service_account_scopes: "https://www.googleapis.com/auth/bigquery"\n service_account_scopes: "https://www.googleapis.com/auth/bigtable.admin.table"\n service_account_scopes: "https://www.googleapis.com/auth/bigtable.data"\n service_account_scopes: "https://www.googleapis.com/auth/cloud.useraccounts.readonly"\n service_account_scopes: "https://www.googleapis.com/auth/devstorage.full_control"\n service_account_scopes: "https://www.googleapis.com/auth/devstorage.read_write"\n service_account_scopes: "https://www.googleapis.com/auth/logging.write"\n service_account_scopes: "https://www.googleapis.com/auth/monitoring.write"\n }\n master_config {\n num_instances: 1\n in... properties {\n key: "distcp:mapreduce.map.memory.mb"\n value: "768"\n }\n properties {\n key: "distcp:mapreduce.map.java.opts"\n value: "-Xmx576m"\n }\n properties {\n key: "core:hadoop.ssl.enabled.protocols"\n value: "TLSv1,TLSv1.1,TLSv1.2"\n }\n properties {\n key: "core:fs.gs.metadata.cache.enable"\n value: "false"\n }\n properties {\n key: "core:fs.gs.block.size"\n value: "134217728"\n }\n properties {\n key: "capacity-scheduler:yarn.scheduler.capacity.root.default.ordering-policy"\n value: "fair"\n }\n }\n endpoint_config {\n }\n}\nlabels {\n key: "goog-dataproc-location"\n value: "us-central1"\n}\nlabels {\n key: "goog-dataproc-cluster-uuid"\n value: "a582e6a6-e167-43fe-bcdd-fa8d9812fb6e"\n}\nlabels {\n key: "goog-dataproc-cluster-name"\n value: "py-cc-test-ed5888c3-0ebd-4a53-bdd7-2edc7e05a7df"\n}\nlabels {\n key: "goog-dataproc-autozone"\n value: "enabled"\n}\nstatus {\n state: ERROR\n state_start_time {\n seconds: 1692968912\n nanos: 827217000\n }\n}\nstatus_history {\n state: CREATING\n state_start_time {\n seconds: 1692968910\n nanos: 74745000\n }\n}\ncluster_uuid: "a582e6a6-e167-43fe-bcdd-fa8d9812fb6e"\n.status + and = .RUNNING + where = ClusterStatus.State |
Oops! Looks like this issue is still flaky. It failed again. 😬 I reopened the issue, but a human will need to close it again. commit: 3b1bc75 Test outputTraceback (most recent call last): File "/workspace/dataproc/snippets/update_cluster_test.py", line 107, in test_update_cluster assert response.status.state == ClusterStatus.State.RUNNING AssertionError: assert == + where = state: ERROR\nstate_start_time {\n seconds: 1697289314\n nanos: 908115000\n}\n.state + where state: ERROR\nstate_start_time {\n seconds: 1697289314\n nanos: 908115000\n}\n = project_id: "python-docs-samples-tests"\ncluster_name: "py-cc-test-a8b73c46-3ba3-4bb5-ae6d-602006b361e9"\nconfig {\n config_bucket: "dataproc-28f7d075-c5ac-4938-a5ad-65eafd8032d3-us-central1"\n temp_bucket: "dataproc-temp-us-central1-1012616486416-3kbzlm1c"\n gce_cluster_config {\n zone_uri: "https://www.googleapis.com/compute/v1/projects/python-docs-samples-tests/zones/us-central1-c"\n network_uri: "https://www.googleapis.com/compute/v1/projects/python-docs-samples-tests/global/networks/default"\n service_account_scopes: "https://www.googleapis.com/auth/bigquery"\n service_account_scopes: "https://www.googleapis.com/auth/bigtable.admin.table"\n service_account_scopes: "https://www.googleapis.com/auth/bigtable.data"\n service_account_scopes: "https://www.googleapis.com/auth/cloud.useraccounts.readonly"\n service_account_scopes: "https://www.googleapis.com/auth/devstorage.full_control"\n service_account_scopes: "https://www.googleapis.com/auth/devstorage.read_write"\n service_account_scopes: "https://www.googleapis.com/auth/logging.write"\n service_account_scopes: "https://www.googleapis.com/auth/monitoring.write"\n }\n master_config {\n num_instances: 1\n in... properties {\n key: "distcp:mapreduce.map.memory.mb"\n value: "768"\n }\n properties {\n key: "distcp:mapreduce.map.java.opts"\n value: "-Xmx576m"\n }\n properties {\n key: "core:hadoop.ssl.enabled.protocols"\n value: "TLSv1,TLSv1.1,TLSv1.2"\n }\n properties {\n key: "core:fs.gs.metadata.cache.enable"\n value: "false"\n }\n properties {\n key: "core:fs.gs.block.size"\n value: "134217728"\n }\n properties {\n key: "capacity-scheduler:yarn.scheduler.capacity.root.default.ordering-policy"\n value: "fair"\n }\n }\n endpoint_config {\n }\n}\nlabels {\n key: "goog-dataproc-location"\n value: "us-central1"\n}\nlabels {\n key: "goog-dataproc-cluster-uuid"\n value: "4e43e0f8-43ed-4489-884a-56ccc97ece40"\n}\nlabels {\n key: "goog-dataproc-cluster-name"\n value: "py-cc-test-a8b73c46-3ba3-4bb5-ae6d-602006b361e9"\n}\nlabels {\n key: "goog-dataproc-autozone"\n value: "enabled"\n}\nstatus {\n state: ERROR\n state_start_time {\n seconds: 1697289314\n nanos: 908115000\n }\n}\nstatus_history {\n state: CREATING\n state_start_time {\n seconds: 1697289311\n nanos: 264192000\n }\n}\ncluster_uuid: "4e43e0f8-43ed-4489-884a-56ccc97ece40"\n.status + and = .RUNNING + where = ClusterStatus.State |
Note: #8894 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
commit: a0b5ecf
buildURL: Build Status, Sponge
status: failed
Test output