Reject bucket responses that have unexpected byte count #8776
📝 Walkthrough

Refactors the test helper createBucketResponseFunction to accept per-layer element-class descriptors and updates tests accordingly. Adds strict bucket-response size validation that throws on mismatch, introduces per-bucket failure tracking and partial-batch retrying in the pull queue, and adds a changelog entry documenting the bucket-reject behavior.
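For orientation, the essence of the new validation, condensed from the diffs reviewed below (a sketch; the exact derivation of `availableBucketCount` is an assumption, the names follow the review excerpts):

```ts
// Condensed sketch of the size check in wkstore_adapter.ts (see the full diff below).
const availableBucketCount = batch.length - missingBuckets.length; // assumption
const expectedTotalByteLength = availableBucketCount * BUCKET_BYTE_LENGTH;
if (expectedTotalByteLength !== buffer.length) {
  throw new Error(
    `Expected ${expectedTotalByteLength} bytes, but received ${buffer.length}. Rejecting buckets.`,
  );
}
```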
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
…to get the dataset somehow)
@MichaelBuessemeyer can you have a look if you have some time? Thank you!
```js
// if (Math.random() < 0.5) {
//   buffer = buffer.subarray(1, buffer.length - 1);
// }
```
Uncomment this for testing
Will remove it before merging
@normanrz please re-assign in case you want someone else to review this PR.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts (2)
215-233: Color four-bit test: token expectation mismatch

Same as above: no refresh is configured here, so the token should be "token".
Apply this diff:
```diff
- const expectedUrl = "url/data/datasets/datasetId/layers/color/data?token=token2";
+ const expectedUrl = "url/data/datasets/datasetId/layers/color/data?token=token";
```

Also consider reducing brittleness like in the previous comment:
```diff
- expect(RequestMock.sendJSONReceiveArraybufferWithHeaders).toHaveBeenCalledWith(
-   expectedUrl,
-   expectedOptions,
- );
+ expect(RequestMock.sendJSONReceiveArraybufferWithHeaders).toHaveBeenCalledWith(
+   expectedUrl,
+   expect.objectContaining(expectedOptions),
+ );
```
239-252: Segmentation test: fix token expectation and reduce options brittleness

Mirror the same fixes here.
Apply this diff:
```diff
- const expectedUrl = "url/data/datasets/datasetId/layers/segmentation/data?token=token2";
+ const expectedUrl = "url/data/datasets/datasetId/layers/segmentation/data?token=token";
```

And make the options check future-proof:
```diff
- expect(RequestMock.sendJSONReceiveArraybufferWithHeaders).toHaveBeenCalledWith(
-   expectedUrl,
-   expectedOptions,
- );
+ expect(RequestMock.sendJSONReceiveArraybufferWithHeaders).toHaveBeenCalledWith(
+   expectedUrl,
+   expect.objectContaining(expectedOptions),
+ );
```

frontend/javascripts/viewer/model/bucket_data_handling/bucket.ts (1)
617-633: Avoid serializing the entire buffer into the error; log only minimal, structured metadata

JSON-stringifying a (typed) array can explode logs and degrade performance. Also, it currently emits full debug info in production but omits it in tests, which is counterintuitive. Recommend logging only actual/expected lengths, channelCount, and the address, and passing structured metadata to ErrorHandling instead of embedding it in the message.
```diff
- if (data.length !== channelCount * Constants.BUCKET_SIZE) {
-   const debugInfo = // Disable this conditional if you need verbose output here.
-     process.env.IS_TESTING
-       ? " (<omitted>)"
-       : {
-           arrayBuffer,
-           actual: data.length,
-           expected: channelCount * Constants.BUCKET_SIZE,
-           channelCount,
-         };
-   console.warn(`bucket.data for ${this.zoomedAddress} has unexpected length`, debugInfo);
-   const error = new Error(
-     `bucket.data has unexpected length. Details: ${JSON.stringify(debugInfo)}`,
-   );
-   ErrorHandling.notify(error);
-   throw error;
- }
+ if (data.length !== channelCount * Constants.BUCKET_SIZE) {
+   const actual = data.length;
+   const expected = channelCount * Constants.BUCKET_SIZE;
+   const meta = {
+     address: this.zoomedAddress.slice(0, 4),
+     actual,
+     expected,
+     channelCount,
+     elementClass: this.elementClass,
+   };
+   console.warn(
+     `bucket.data has unexpected length for ${meta.address} (actual=${actual}, expected=${expected})`,
+     meta,
+   );
+   const error = new Error("bucket.data has unexpected length");
+   ErrorHandling.notify(error, meta);
+   throw error;
+ }
```

frontend/javascripts/viewer/model/bucket_data_handling/pullqueue.ts (1)
121-131: Failed buckets are not re-queued unless dirty — this breaks “auto-reload” for read-only buckets

If a bucket fails (e.g., unexpected byte length), non-dirty buckets are marked UNREQUESTED but never re-enqueued here. Unless some other subsystem re-adds them, they won’t be retried, contradicting the PR’s goal to auto-reload. Re-queue all failed data buckets with highest priority.
```diff
  failedBucketAddresses = failedBucketAddresses.length === 0 ? batch : failedBucketAddresses;
  for (const bucketAddress of failedBucketAddresses) {
    const bucket = this.cube.getBucket(bucketAddress);
    if (bucket.type === "data") {
      bucket.markAsFailed(false);
-
-     if (bucket.dirty) {
-       bucket.addToPullQueueWithHighestPriority();
-     }
+     // Always re-queue to honor the "auto-reload" promise.
+     bucket.addToPullQueueWithHighestPriority();
    }
  }
```

Follow-up: if you’re concerned about hot loops under persistent failure, the existing backoff (`getRetryDelay`) and `isRetryScheduled` guards already mitigate that. You could additionally cap retries per bucket via a transient counter on the bucket, but that’s optional.
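A minimal sketch of such a cap, assuming a hypothetical transient `retryCount` field on the bucket (not existing code):

```ts
// Hypothetical retry cap inside the re-queue loop; retryCount does not
// exist on DataBucket today and would need to be added.
const MAX_BUCKET_RETRIES = 5;
if (bucket.type === "data") {
  bucket.retryCount = (bucket.retryCount ?? 0) + 1;
  if (bucket.retryCount <= MAX_BUCKET_RETRIES) {
    bucket.addToPullQueueWithHighestPriority();
  } else {
    // Give up after repeated failures and treat the bucket as missing.
    bucket.markAsFailed(true);
  }
}
```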
frontend/javascripts/test/helpers/apiHelpers.ts (1)
253-263: Bug: overrides ignore channelCount and mis-index across buckets

`indexInBucket` computes a per-voxel index but:

- does not multiply the bucket stride by `channelCount` (breaks for multi-channel types like "uint24"),
- writes only the first channel for multi-channel data.

At minimum, fix the stride so overrides target the correct bucket region. Optionally, set all channels if overrides are intended to be per-voxel.

```diff
- for (let bucketIdx = 0; bucketIdx < bucketCount; bucketIdx++) {
-   for (const { position, value } of overrides) {
-     const [x, y, z] = position;
-     const indexInBucket =
-       bucketIdx * Constants.BUCKET_WIDTH ** 3 +
-       z * Constants.BUCKET_WIDTH ** 2 +
-       y * Constants.BUCKET_WIDTH +
-       x;
-     typedArray[indexInBucket] = value;
-   }
- }
+ const voxelsPerBucket = Constants.BUCKET_WIDTH ** 3;
+ for (let bucketIdx = 0; bucketIdx < bucketCount; bucketIdx++) {
+   const bucketBase = bucketIdx * channelCount * voxelsPerBucket;
+   for (const { position, value } of overrides) {
+     const [x, y, z] = position;
+     const voxelIndex =
+       z * Constants.BUCKET_WIDTH ** 2 + y * Constants.BUCKET_WIDTH + x;
+     const elementIndex = bucketBase + voxelIndex;
+     // Minimal fix: affect first channel
+     typedArray[elementIndex] = value;
+     // Optional: uncomment to assign all channels
+     // for (let c = 0; c < channelCount; c++) {
+     //   typedArray[bucketBase + voxelIndex + c * voxelsPerBucket] = value;
+     // }
+   }
+ }
```
♻️ Duplicate comments (1)
frontend/javascripts/viewer/model/bucket_data_handling/wkstore_adapter.ts (1)
274-276: Remove or gate the fuzzing snippet behind a build-time flag

These commented lines are a handy test hook but easy to forget and confusing in production code. Either remove them or gate via an env flag to allow opt-in local testing.
Option A — remove:
```diff
- // if (Math.random() < 0.5) {
- //   buffer = buffer.subarray(1, buffer.length - 1);
- // }
```

Option B — gate behind an env flag:
```diff
+ if (process.env.WK_FUZZ_BUCKET_BYTES === "1") {
+   buffer = buffer.subarray(1, buffer.length - 1);
+ }
```
🧹 Nitpick comments (21)
frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts (8)
72-75: Tighten TestContext typing for clarity

Optional: Narrow the types to APIColorLayer and APISegmentationLayer to catch mismatches at compile time and make intent explicit.
Example:
```ts
// additionally import the concrete types:
// import type { APIColorLayer, APISegmentationLayer } from "types/api_types";
interface TestContext {
  colorLayer: APIColorLayer;
  segmentationLayer: APISegmentationLayer;
}
```
83-85: Avoid relying on fixture layer ordering; select by category/name instead

Index-based selection is brittle if the fixture changes order. Prefer a guarded lookup.
Apply this diff:
```diff
- context.colorLayer = dataSource.dataLayers[0];
- context.segmentationLayer = dataSource.dataLayers[1];
+ const color = dataSource.dataLayers.find((l) => l.category === "color");
+ const seg = dataSource.dataLayers.find((l) => l.category === "segmentation");
+ if (!color || !seg) {
+   throw new Error("Expected color and segmentation layers in fixture");
+ }
+ context.colorLayer = color;
+ context.segmentationLayer = seg;
```
93-103: Ensure byteCount is an integer to avoid range length off-by-one in four-bit mode

When fourBitFactor is 0.5, the product depends on BUCKET_SIZE being even. Make this explicit and future-proof by truncating to an integer.

Apply this diff:

```diff
- const fourBitFactor = _fourBit && layer.category === "color" ? 0.5 : 1;
- const byteCount = fourBitFactor * getByteCountFromLayer(layer) * Constants.BUCKET_SIZE;
+ const fourBitFactor = _fourBit && layer.category === "color" ? 0.5 : 1;
+ const voxelBytes = getByteCountFromLayer(layer);
+ const byteCount = Math.trunc(fourBitFactor * voxelBytes * Constants.BUCKET_SIZE);
```

Optional micro-optimization (not required): build bucketData1/2 as Uint8Array directly and assemble responseBuffer via set(...) to avoid large intermediate arrays. I can provide a focused patch if you’d like.
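A rough sketch of that assembly, assuming `byteCount` is the per-bucket byte length computed above (fill values are placeholders):

```ts
const bucketData1 = new Uint8Array(byteCount).fill(1);
const bucketData2 = new Uint8Array(byteCount).fill(2);
// Assemble the response without large intermediate plain arrays.
const responseBuffer = new Uint8Array(2 * byteCount);
responseBuffer.set(bucketData1, 0);
responseBuffer.set(bucketData2, byteCount);
```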
208-211: Reduce brittleness: allow extra option fields without breaking the test

If request options gain new non-essential keys, this strict equality will fail. Using objectContaining preserves intent while keeping future changes flexible.
Apply this diff:
```diff
- expect(RequestMock.sendJSONReceiveArraybufferWithHeaders).toHaveBeenCalledWith(
-   expectedUrl,
-   expectedOptions,
- );
+ expect(RequestMock.sendJSONReceiveArraybufferWithHeaders).toHaveBeenCalledWith(
+   expectedUrl,
+   expect.objectContaining(expectedOptions),
+ );
```
125-172: Optional: strengthen token handling assertions

You can assert the retry sequence more explicitly to document behavior: 403 triggers a single token refresh and exactly two send attempts.
Example additions:
```ts
expect(RequestMock.receiveJSON).toHaveBeenCalledTimes(2);
expect(RequestMock.sendJSONReceiveArraybufferWithHeaders).toHaveBeenCalledTimes(2);
```
254-256: Performance: full BUCKET_SIZE arrays increase test runtime

Allocating/compressing full-bucket data is realistic but can slow tests. Consider stubbing the worker or computing the base64 once and reusing it.
Example (compute once):
```ts
const data = new Uint8Array(Constants.BUCKET_SIZE);
// ...
const encoded = byteArraysToLz4Base64([data])[0];
// reuse `encoded` below in both items
```

Or mock the worker to return a deterministic placeholder during tests. I can send a patch if you prefer that route.
282-306: DRY: avoid recomputing the same base64 twice

Compute once and reuse to shave time and make intent clearer.
Example patch (outside this exact range due to variable placement):
```ts
const encoded = byteArraysToLz4Base64([data])[0];
// ...
base64Data: encoded,
// ...
base64Data: encoded,
```
93-123: Add negative-path coverage for “reject unexpected bucket sizes”

Given the PR objective, consider a test where the response buffer is too short/long to ensure requestWithFallback rejects/marks buckets missing as intended. I can provide a ready-to-merge test.
Example (new test to add):
it<TestContext>("requestWithFallback: rejects responses with unexpected byte count", async ({ colorLayer }) => { const { batch } = ((): { batch: BucketAddress[] } => { const b: BucketAddress[] = [ [0, 0, 0, 0], [1, 1, 1, 1], ]; return { batch: b }; })(); // Build an undersized buffer (one bucket instead of two) const voxelBytes = getByteCountFromLayer(colorLayer); const bytesPerBucket = Math.trunc(voxelBytes * Constants.BUCKET_SIZE); const undersizedBuffer = new Uint8Array(bytesPerBucket); // should be 2 * bytesPerBucket vi.mocked(Request) .sendJSONReceiveArraybufferWithHeaders.mockReset() .mockResolvedValue({ buffer: undersizedBuffer.buffer, headers: { "missing-buckets": "[]" }, }); await expect(requestWithFallback(colorLayer, batch)).rejects.toThrow(/unexpected/i); });unreleased_changes/8776.md (1)
2-2: Clarify wording and use “byte length” to match the implementation

“Unexpected size” is a bit vague. Suggest making it explicit that the byte length is validated and that affected buckets are retried automatically.
```diff
-- Reject bucket request responses with unexpected size and try to reload those buckets automatically.
+- Reject bucket request responses with an unexpected byte length and automatically re-request the affected buckets.
```

frontend/javascripts/viewer/model/bucket_data_handling/wkstore_adapter.ts (2)
280-284: Improve error context; avoid leaking large buffers

The thrown error is good, but include the layer name and counts to simplify triage and keep the payload small. Do not embed or stringify the buffer.
```diff
- if (expectedTotalByteLength !== buffer.length) {
-   throw new Error(
-     `Expected ${expectedTotalByteLength} bytes, but received ${buffer.length}. Rejecting buckets.`,
-   );
- }
+ if (expectedTotalByteLength !== buffer.length) {
+   const msg = `Invalid bucket payload size for layer "${layerInfo.name}": expected ${expectedTotalByteLength} bytes (${availableBucketCount} buckets × ${BUCKET_BYTE_LENGTH}), received ${buffer.length}. Missing indices: [${missingBuckets.join(", ")}].`;
+   throw new Error(msg);
+ }
```
268-292: Type hygiene: narrow return element type and validate indices (optional)

- Consider changing the return type to `Array<Uint8Array | null>` to match the actual values; `Uint8Array<ArrayBuffer>` is not a valid TS generic and can confuse tooling.
- Optionally assert that all values in `missingBuckets` are within `[0, batch.length)`.
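A minimal sketch of that guard (names follow this review, not necessarily the actual code):

```ts
// Reject out-of-range indices reported by the server early.
for (const index of missingBuckets) {
  if (index < 0 || index >= batch.length) {
    throw new Error(
      `Server reported missing-bucket index ${index} outside of [0, ${batch.length}).`,
    );
  }
}
```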
frontend/javascripts/viewer/model/bucket_data_handling/pullqueue.ts (1)
87-117: Per-bucket error isolation is good; also type `failedBucketAddresses` explicitly

Catching per-bucket failures is the right move. Minor: declare the array as `BucketAddress[]` for clarity and type-safety.

```diff
- let hasErrored = false;
- let failedBucketAddresses = [];
+ let hasErrored = false;
+ let failedBucketAddresses: BucketAddress[] = [];
```

frontend/javascripts/test/sagas/volumetracing/bucket_eviction_without_saving.spec.ts (1)
15-28: Consider adding a regression test for “unexpected byte length” auto-retry

To lock in the new behavior, add a spec that first returns a truncated buffer for one bucket, then a correct buffer, and asserts the bucket eventually loads without crashing. I can draft this if helpful.
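A possible shape for that spec (a sketch; the mock setup mirrors the existing tests and would need adapting to the real fixtures):

```ts
let callCount = 0;
vi.mocked(mocks.Request).sendJSONReceiveArraybufferWithHeaders.mockImplementation(
  async (url: string, payload: any) => {
    const ok = await createBucketResponseFunction(
      { volumeTracingId: "uint16", color: "uint8" },
      0,
      0,
    )(url, payload);
    callCount++;
    if (callCount === 1) {
      // First response: drop the last byte to trigger the size validation.
      const truncated = new Uint8Array(ok.buffer).slice(0, -1).buffer;
      return { buffer: truncated, headers: ok.headers };
    }
    // Subsequent responses are valid, so the retried bucket should load.
    return ok;
  },
);
```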
frontend/javascripts/test/sagas/volumetracing/volumetracing_remote_bucket_updates.spec.ts (1)
35-35: DRY up the dtype map used in both mocks

You repeat the same per-layer map twice. Consider extracting a local `const dtypeMap = { volumeTracingId: "uint16", color: "uint8" }` and using it in both `mockImplementation` calls to reduce duplication and avoid drift in future edits.

```diff
@@
-   vi.mocked(mocks.Request).sendJSONReceiveArraybufferWithHeaders.mockImplementation(
-     createBucketResponseFunction({ volumeTracingId: "uint16", color: "uint8" }, oldCellId, 5),
-   );
+   const dtypeMap = { volumeTracingId: "uint16", color: "uint8" } as const;
+   vi.mocked(mocks.Request).sendJSONReceiveArraybufferWithHeaders.mockImplementation(
+     createBucketResponseFunction(dtypeMap, oldCellId, 5),
+   );
@@
-   vi.mocked(mocks.Request).sendJSONReceiveArraybufferWithHeaders.mockImplementation(
-     createBucketResponseFunction({ volumeTracingId: "uint16", color: "uint8" }, newCellId, 5),
-   );
+   vi.mocked(mocks.Request).sendJSONReceiveArraybufferWithHeaders.mockImplementation(
+     createBucketResponseFunction(dtypeMap, newCellId, 5),
+   );
```

Also applies to: 54-54
frontend/javascripts/test/sagas/volumetracing/bucket_eviction_with_saving.spec.ts (1)
21-21: Include the volume tracing layer dtype to harden the initial mock

If any bucket requests for the volume tracing layer occur before `testLabelingManyBuckets` installs its own mock, the current map (missing `volumeTracingId`) would trigger “Layer Type … not provided.” Safer to include it here as well.

```diff
- createBucketResponseFunction({ color: "uint8", segmentation: "uint16" }, 0, 0),
+ createBucketResponseFunction({ volumeTracingId: "uint16", color: "uint8", segmentation: "uint16" }, 0, 0),
```

frontend/javascripts/test/api/api_volume_latest.spec.ts (1)
56-56: Nice: deterministic bucket mock via per-layer dtype map; consider adding a negative-size test

The mock setup is correct and uses the new descriptor style properly. Given the PR’s goal (reject unexpected byte counts), consider adding a test that intentionally returns an undersized/oversized buffer and asserts that the client rejects it and recovers gracefully.

Here’s a minimal pattern you could adapt in this spec (or a new one) to simulate a truncated buffer:

```diff
@@
  it<WebknossosTestContext>("Data API: labelVoxels should label a list of voxels", async ({
    api,
    mocks,
  }) => {
    vi.mocked(mocks.Request.sendJSONReceiveArraybufferWithHeaders).mockImplementation(
      createBucketResponseFunction({ volumeTracingId: "uint16", color: "uint8" }, 0, 0),
    );
@@
  });
+
+ it<WebknossosTestContext>("Data API: rejects bucket responses with unexpected byte length", async ({ api, mocks }) => {
+   // Wrap the normal helper but tamper with the returned buffer length.
+   vi.mocked(mocks.Request.sendJSONReceiveArraybufferWithHeaders).mockImplementation(
+     async (url: string, payload: any) => {
+       const ok = await createBucketResponseFunction({ volumeTracingId: "uint16", color: "uint8" }, 0, 0)(url, payload);
+       // Return a buffer missing the last byte.
+       const full = new Uint8Array(ok.buffer);
+       const truncated = full.slice(0, Math.max(0, full.byteLength - 1)).buffer;
+       return { buffer: truncated, headers: ok.headers };
+     },
+   );
+   const volumeTracingId = api.data.getVolumeTracingLayerIds()[0];
+   // The actual assertion here should reflect the client behavior introduced in this PR,
+   // e.g., throwing, logging, and scheduling a retry. Adjust as needed:
+   await expect(api.data.reloadAllBuckets()).rejects.toBeTruthy();
+   // Optionally, verify the system remains responsive:
+   await expect(api.data.getDataValue(volumeTracingId, [0, 0, 0])).resolves.toBeDefined();
+ });
```

Also applies to: 58-60
frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration_1.spec.ts (1)
88-93: Repeated descriptor usage is correct; consider DRY-ing the constant

The updated multi-line usage is clear. Since the same descriptor reappears across this file, consider factoring it into a local constant (e.g., LAYER_DTYPES) to reduce repetition and ease future edits.

```diff
+ const LAYER_DTYPES = { volumeTracingId: "uint16", color: "uint8" } as const;
  ...
- createBucketResponseFunction(
-   { volumeTracingId: "uint16", color: "uint8" },
-   newCellId + 1,
-   500,
- ),
+ createBucketResponseFunction(LAYER_DTYPES, newCellId + 1, 500),
```

frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration_2.spec.ts (1)
139-140: Consider a shared constant for the dtype mapping

The same descriptor appears multiple times throughout this spec. Extracting a constant at the top keeps things consistent and reduces maintenance overhead.

```diff
+ const LAYER_DTYPES = { volumeTracingId: "uint16", color: "uint8" } as const;
  ...
- createBucketResponseFunction({ volumeTracingId: "uint16", color: "uint8" }, 0)
+ createBucketResponseFunction(LAYER_DTYPES, 0)
```

Also applies to: 226-227, 302-303, 377-378, 449-450, 509-510, 591-592
frontend/javascripts/test/helpers/apiHelpers.ts (3)
232-241: Layer-name parsing is brittle; resolve by scanning for the 'layers' segment

Index-based extraction (`urlPathParts[4]` vs `urlPathParts[2]`) will break if the path layout changes (e.g., tracingstore vs datastore routes). Make this robust by finding the 'layers' segment and taking the next token. Fall back to the last token if not found.

```diff
- const requestedURL = new URL(_url);
- // Removing first empty part as the pathname always starts with a /.
- const urlPathParts = requestedURL.pathname.split("/").slice(1);
- const requestedLayerName = urlPathParts[0] === "data" ? urlPathParts[4] : urlPathParts[2];
+ const requestedURL = new URL(_url);
+ const urlPathParts = requestedURL.pathname.split("/").filter(Boolean);
+ const layersIdx = urlPathParts.findIndex((p) => p === "layers");
+ const requestedLayerName =
+   layersIdx >= 0 && layersIdx + 1 < urlPathParts.length
+     ? urlPathParts[layersIdx + 1]
+     : urlPathParts.at(-1)!; // fallback: last segment
```
245-251: Typed array allocation/fill: OK; minor simplification possible

The logic correctly scales with element size and channel count, and handles BigInt arrays. Minor: returning `typedArray.buffer` directly below avoids the redundant `new Uint8Array(...)` wrapper.

```diff
- return {
-   buffer: new Uint8Array(typedArray.buffer).buffer,
+ return {
+   buffer: typedArray.buffer,
    headers: {
      "missing-buckets": "[]",
    },
  };
```
232-241: Optional: add negative-test support to simulate malformed bucket byte lengths

Given the PR’s goal (reject unexpected byte counts), consider an overload/helper that intentionally skews the returned byte length (±N bytes) to make tests explicit and deterministic instead of relying on commented toggles elsewhere.

I can draft a `createSkewedBucketResponseFunction(dataTypePerLayer, fill, { byteSkew })` that wraps this implementation and slices/extends the ArrayBuffer accordingly. Want me to propose it?
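One possible shape for such a helper (hypothetical; it wraps the existing `createBucketResponseFunction` and assumes its `(dataTypePerLayer, fillValue, delay)` signature):

```ts
function createSkewedBucketResponseFunction(
  dataTypePerLayer: Record<string, ElementClass>,
  fillValue: number,
  { byteSkew }: { byteSkew: number },
) {
  const inner = createBucketResponseFunction(dataTypePerLayer, fillValue, 0);
  return async (url: string, payload: unknown) => {
    const ok = await inner(url, payload);
    const bytes = new Uint8Array(ok.buffer);
    // Positive byteSkew zero-pads the payload; negative byteSkew truncates it.
    const skewed = new Uint8Array(Math.max(0, bytes.length + byteSkew));
    skewed.set(bytes.subarray(0, skewed.length));
    return { buffer: skewed.buffer, headers: ok.headers };
  };
}
```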
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (14)
- frontend/javascripts/test/api/api_volume_latest.spec.ts (2 hunks)
- frontend/javascripts/test/helpers/apiHelpers.ts (3 hunks)
- frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts (10 hunks)
- frontend/javascripts/test/sagas/proofreading.spec.ts (1 hunk)
- frontend/javascripts/test/sagas/volumetracing/bucket_eviction_helper.ts (1 hunk)
- frontend/javascripts/test/sagas/volumetracing/bucket_eviction_with_saving.spec.ts (1 hunk)
- frontend/javascripts/test/sagas/volumetracing/bucket_eviction_without_saving.spec.ts (1 hunk)
- frontend/javascripts/test/sagas/volumetracing/volumetracing_remote_bucket_updates.spec.ts (2 hunks)
- frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration_1.spec.ts (6 hunks)
- frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration_2.spec.ts (8 hunks)
- frontend/javascripts/viewer/model/bucket_data_handling/bucket.ts (1 hunk)
- frontend/javascripts/viewer/model/bucket_data_handling/pullqueue.ts (2 hunks)
- frontend/javascripts/viewer/model/bucket_data_handling/wkstore_adapter.ts (1 hunk)
- unreleased_changes/8776.md (1 hunk)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2024-11-22T17:18:04.217Z
Learnt from: dieknolle3333
PR: scalableminds/webknossos#8168
File: frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039-1039
Timestamp: 2024-11-22T17:18:04.217Z
Learning: In `frontend/javascripts/oxalis/model/sagas/proofread_saga.ts`, when calling `getMagInfo`, the use of `volumeTracingLayer.resolutions` is intentional and should not be changed to `volumeTracingLayer.mags`.
Applied to files:
frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration_1.spec.ts
🧬 Code graph analysis (11)
- frontend/javascripts/test/api/api_volume_latest.spec.ts (1)
  - frontend/javascripts/test/helpers/apiHelpers.ts (1): createBucketResponseFunction (224-272)
- frontend/javascripts/test/sagas/proofreading.spec.ts (1)
  - frontend/javascripts/test/helpers/apiHelpers.ts (1): createBucketResponseFunction (224-272)
- frontend/javascripts/test/sagas/volumetracing/bucket_eviction_with_saving.spec.ts (1)
  - frontend/javascripts/test/helpers/apiHelpers.ts (1): createBucketResponseFunction (224-272)
- frontend/javascripts/viewer/model/bucket_data_handling/wkstore_adapter.ts (1)
  - frontend/javascripts/libs/DRACOWorker.worker.js (1): buffer (28-28)
- frontend/javascripts/test/sagas/volumetracing/bucket_eviction_helper.ts (1)
  - frontend/javascripts/test/helpers/apiHelpers.ts (1): createBucketResponseFunction (224-272)
- frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration_2.spec.ts (1)
  - frontend/javascripts/test/helpers/apiHelpers.ts (1): createBucketResponseFunction (224-272)
- frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration_1.spec.ts (1)
  - frontend/javascripts/test/helpers/apiHelpers.ts (1): createBucketResponseFunction (224-272)
- frontend/javascripts/test/sagas/volumetracing/bucket_eviction_without_saving.spec.ts (1)
  - frontend/javascripts/test/helpers/apiHelpers.ts (1): createBucketResponseFunction (224-272)
- frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts (3)
  - frontend/javascripts/types/api_types.ts (1): APIDataLayer (113-113)
  - frontend/javascripts/test/fixtures/hybridtracing_object.ts (1): colorLayer (12-31)
  - frontend/javascripts/viewer/model/bucket_data_handling/wkstore_adapter.ts (1): requestWithFallback (95-170)
- frontend/javascripts/test/sagas/volumetracing/volumetracing_remote_bucket_updates.spec.ts (1)
  - frontend/javascripts/test/helpers/apiHelpers.ts (1): createBucketResponseFunction (224-272)
- frontend/javascripts/test/helpers/apiHelpers.ts (2)
  - frontend/javascripts/types/api_types.ts (1): ElementClass (35-46)
  - frontend/javascripts/viewer/model/helpers/typed_buffer.ts (2): TypedArrayConstructor (4-13), getConstructorForElementClass (15-51)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-smoketest-push
- GitHub Check: backend-tests
🔇 Additional comments (14)
frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts (3)
2-2: Good call: use per-layer byte width instead of fixed sizes

Importing getByteCountFromLayer ensures the test data reflects the dtype (e.g., uint32 vs uint8). This aligns the test with the new validation logic.
16-16: Appropriate use of shared constants

Importing Constants (for BUCKET_SIZE) removes magic numbers and keeps the test consistent with production defaults.
87-91: Sanity check on color layer looks good

Verifying name/category/bit depth is a solid invariant check before deeper tests run.
frontend/javascripts/test/sagas/volumetracing/bucket_eviction_without_saving.spec.ts (1)
19-20: LGTM: test helper migration to per-layer descriptors

Switching to `{ volumeTracingId: "uint16", color: "uint8" }` aligns the spec with the new helper API and mixed-layer scenarios. No issues spotted.

frontend/javascripts/test/sagas/proofreading.spec.ts (1)
65-65: Switched to per-layer dtype map — aligned with the new createBucketResponseFunction signature

Providing `{ color: "uint8", segmentation: "uint16" }` matches the helper’s expected `Record<string, ElementClass>` and should prevent “Layer Type … not provided” errors for these two layers. Looks good.

frontend/javascripts/test/api/api_volume_latest.spec.ts (1)
1-5: Importing createBucketResponseFunction — consistent with the new mocking approach

The import changes are correct and consistent with the updated test helper API.
frontend/javascripts/test/sagas/volumetracing/bucket_eviction_helper.ts (1)
35-39: Good: per-layer dtype map covers all likely requests

Including `{ volumeTracingId: "uint16", color: "uint8", segmentation: "uint16" }` avoids missing-type errors during the heavy labeling flow. Looks solid.

frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration_1.spec.ts (2)
54-55: Switch to per-layer descriptor for createBucketResponseFunction: looks good

Passing `{ volumeTracingId: "uint16", color: "uint8" }` aligns with the new helper signature and clarifies intent per layer. No issues spotted.
113-114: Layer name consistency verified

- The `volumetracing_server_objects.ts` fixture defines `TRACING_ID = "volumeTracingId"` and exposes it on each `annotationLayer.tracingId` field (frontend/javascripts/test/fixtures/volumetracing_server_objects.ts lines 8–16).
- All volumetracing saga tests invoke `createBucketResponseFunction({ volumeTracingId: "uint16", color: "uint8" }, …)` (e.g., frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration_1.spec.ts lines 113–114), matching the fixture layer key.
- The viewer’s data API builds bucket URLs via `${dataset.dataStore.url}/data/datasets/${…}/layers/${layerName}/data?…` where `layerName` is sourced from `getVolumeTracingLayerIds()` — i.e., the `tracingId` field (`"volumeTracingId"`) — ensuring full alignment between tests and runtime endpoints.

No discrepancies found; no changes needed.
frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration_2.spec.ts (1)

49-50: Descriptor-based mock update: OK

Using `{ volumeTracingId: "uint16", color: "uint8" }` matches the refactored helper. Good update.

frontend/javascripts/test/helpers/apiHelpers.ts (4)
53-55: New types/imports fit the refactor

Importing ElementClass and getConstructorForElementClass is appropriate for per-layer dtype handling.
83-85: Default mock wiring is sensible

Providing a default mapping `{ color: "uint8", segmentation: "uint16" }` for tests that don’t override the mock keeps backwards compatibility.
224-229: Function signature: clear and future-proof

Switching to `dataTypePerLayer: Record<string, ElementClass>` is the right abstraction for multi-layer scenarios.
232-243: Confirmed: “double”/Float64Array is intentionally unsupported

The WebKnossos elementClass only includes uint8, uint16, uint32, and float32. An ElementClass of `"double"` (Float64Array) isn’t documented or implemented, so `getConstructorForElementClass` throwing for `"double"` is the expected behavior. No changes to this helper are required unless future support for `float64` is added.
```diff
  it<TestContext>("requestWithFallback: Request Handling: should pass the correct request parameters", async ({
-   layer,
+   colorLayer,
  }) => {
-   const { batch } = prepare();
+   const { batch } = prepare(colorLayer);
    const expectedUrl = "url/data/datasets/datasetId/layers/color/data?token=token2";
    const expectedOptions = createExpectedOptions();

-   await requestWithFallback(layer, batch).then(() => {
+   await requestWithFallback(colorLayer, batch).then(() => {
      const RequestMock = vi.mocked(Request);
```
Expected URL uses token2 but no refresh occurs in this test scenario
In this test, Request.receiveJSON isn’t configured to return a second token. With the current setup (no 403), requestWithFallback should use "token". The assertion for token2 will fail spuriously.
Apply this diff:
```diff
- const expectedUrl = "url/data/datasets/datasetId/layers/color/data?token=token2";
+ const expectedUrl = "url/data/datasets/datasetId/layers/color/data?token=token";
```
```ts
} else {
  this.handleBucket(bucket, bucketBuffer);
  if (bucketBuffer == null && !renderMissingDataBlack) {
    bucket.markAsFailed(true);
```
What is the difference between `bucket.markAsFailed(true)` and `failedBucketAddresses.push(bucketAddress)`?
`failedBucketAddresses` is an array of bucket addresses which stores buckets that have no / broken data and thus should be immediately re-requested. This is done a few lines below (lines 121ff.):
```ts
failedBucketAddresses = failedBucketAddresses.length === 0 ? batch : failedBucketAddresses;
for (const bucketAddress of failedBucketAddresses) {
  const bucket = this.cube.getBucket(bucketAddress);
  if (bucket.type === "data") {
    bucket.markAsFailed(false);
    if (bucket.dirty) {
      bucket.addToPullQueueWithHighestPriority();
    }
  }
}
```
And `bucket.markAsFailed(true)` sets the state of the data bucket in the frontend to "missing", leading to some other side effects (if the user annotated something while the bucket was loading, it will be added to the push queue). At least that's how I read the code.
This PR makes bucket requests more resilient: it automatically rejects responses whose returned data length is unexpected. A lot of test fixing was needed to get this working, but it should be fine now.
URL of deployed dev instance (used for testing):
Steps to test:
TODOs:
- `createBucketResponseFunction` does not respect the dtype (e.g., 32-bit layers need 4 times the amount of bytes)

Issues:
(Please delete unneeded items, merge only when none are left open)

- Added changelog entry (create a `$PR_NUMBER.md` file in `unreleased_changes` or use `./tools/create-changelog-entry.py`)