Enable custom AI Instance Segmentation jobs #8849
Conversation
📝 Walkthrough
Adds instance-model training and inference: backend parameter types and controller actions, new JobCommand values, routes, DB worker enablement, AiModel JSON category, and a large frontend refactor replacing a monolithic AI job UI with modular ai_job_modals components and forms.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 26
🔭 Outside diff range comments (6)
app/models/aimodels/AiModel.scala (1)
147-156
: Persist category as string in INSERT
The SQL interpolator likely doesn’t know how to serialize AiModelCategory; the DB column appears to be text/varchar (since parse reads Option[String]). Persist the string value.
```diff
-      ${a.comment}, ${a.category}, ${a.created}, ${a.modified}, ${a.isDeleted}
+      ${a.comment}, ${a.category.map(_.toString)}, ${a.created}, ${a.modified}, ${a.isDeleted}
```

frontend/javascripts/types/api_types.ts (1)
831-842
: Make AiModel.category nullable/optional to match backend Option
Backend emits category as Option[AiModelCategory]. The frontend type must tolerate null/undefined to handle legacy models and gradual rollout.
```diff
 export type AiModel = {
   readonly id: string;
   readonly name: string;
   readonly isOwnedByUsersOrganization: boolean;
   readonly sharedOrganizationIds: string[] | null | undefined;
   readonly dataStore: APIDataStore;
   readonly user: APIUser | null | undefined;
   readonly comment: string;
   readonly created: number;
   readonly trainingJob: APIJob | null;
-  readonly category: APIAiModelCategory;
+  readonly category: APIAiModelCategory | null | undefined;
 };
```

frontend/javascripts/admin/job/job_list_view.tsx (1)
399-406
: Missing action message for TRAIN_INSTANCE_MODEL
When training instance models succeeds, the user should see the same guidance as for neuron models.
Apply this diff:
```diff
-  } else if (
-    job.type === APIJobType.TRAIN_NEURON_MODEL ||
-    job.type === APIJobType.DEPRECATED_TRAIN_MODEL
-  ) {
+  } else if (
+    job.type === APIJobType.TRAIN_NEURON_MODEL ||
+    job.type === APIJobType.TRAIN_INSTANCE_MODEL ||
+    job.type === APIJobType.DEPRECATED_TRAIN_MODEL
+  ) {
     return (
       <span>
         {job.state === "SUCCESS" &&
           "The model may now be selected from the “AI Analysis“ button when viewing a dataset."}
       </span>
     );
```

frontend/javascripts/admin/voxelytics/ai_model_list_view.tsx (2)
174-181
: Fix potential stale state update when appending annotations.
Use a functional state update to avoid race conditions when multiple additions happen quickly.
```diff
-  onAddAnnotationsInfos={(newItems) => {
-    setAnnotationInfosForAiJob([...annotationInfosForAiJob, ...newItems]);
-  }}
+  onAddAnnotationsInfos={(newItems) => {
+    setAnnotationInfosForAiJob((prev) => [...prev, ...newItems]);
+  }}
```
267-298
: AntD Modal prop “onClose” is not supported; use “afterClose” or remove.
Passing an unknown prop likely triggers a TS error and is ignored at runtime.
```diff
-      <Modal
+      <Modal
         title={`Edit Organizations with Access to AI Model ${model.name}`}
         open
         onOk={submitNewSharedOrganizations}
         onCancel={onClose}
-        onClose={onClose}
+        afterClose={onClose}
         maskClosable={false}
         width={800}
       >
```

app/controllers/AiModelController.scala (1)
149-166
: Critical: YAML-only path dereferences a missing annotation — fix required
Confirmed: the controller allows workflowYaml to satisfy the validation but then unconditionally reads the first annotation (headOption.toFox), which will fail when trainingAnnotations is empty.
- File: app/controllers/AiModelController.scala — lines ~149–166
Fast safe fix (require at least one annotation):
```diff
-      _ <- Fox
-        .fromBool(trainingAnnotations.nonEmpty || request.body.workflowYaml.isDefined) ?~> "aiModel.training.zeroAnnotations"
-      firstAnnotationId <- trainingAnnotations.headOption.map(_.annotationId).toFox
+      _ <- Fox
+        .fromBool(trainingAnnotations.nonEmpty) ?~> "aiModel.training.zeroAnnotations"
+      firstAnnotationId <- trainingAnnotations.headOption.map(_.annotationId).toFox
```

Alternative (support YAML-only jobs): extend RunNeuronModelTrainingParameters (or the request body) with a dataStore/dataset descriptor (e.g., dataStoreName) and branch:
- If trainingAnnotations.nonEmpty → derive dataset/dataStore from first annotation (current behavior).
- Else → use provided dataStoreName to resolve dataset/dataStore and submit the job.
Do you want me to implement the quick change or draft the larger controller+types+frontend refactor?
🧹 Nitpick comments (37)
unreleased_changes/8849.md (1)
1-3
: Changelog entry: expand details and reference the linked issue
The entry is a bit terse. Consider explicitly listing the new job commands, new parameters, and linking the fixed issue to help operators and users.
Apply this diff to enrich the entry:
```diff
 ### Added
-- Added support for training and running AI instance segmentation models from WK UI.
+- Support for training and running AI instance segmentation models from the WEBKNOSSOS UI.
+  - New job commands: train_instance_model, infer_instances.
+  - Training parameter: max_distance_nm.
+  - Inference parameter: seed_generator_distance_threshold.
+  - AiModel now exposes a category (e.g., EM_NUCLEI vs EM_NEURONS).
+  - Backend/UI routes and forms added for instance-model training/inference.
+  - Feature flags enabled for dev: features.jobsEnabled and features.voxelyticsEnabled.
+
+Fixes: #8278
```

conf/application.conf (1)
154-155
: Enabling jobs and voxelytics by default: gate via env for safer defaults
Flipping these to true unconditionally may surprise operators and CI (e.g., background workers expected, costs UI visible). Recommend keeping safe defaults and enabling via environment.
Apply this diff to make the flags opt-in via env in dev, while defaulting to false:
```diff
-  jobsEnabled = true
-  voxelyticsEnabled = true
+  jobsEnabled = false
+  jobsEnabled = ${?WK_FEATURE_JOBS_ENABLED}
+  voxelyticsEnabled = false
+  voxelyticsEnabled = ${?WK_FEATURE_VOXELYTICS_ENABLED}
```

Please verify that the dev deployment sets WK_FEATURE_JOBS_ENABLED=true and WK_FEATURE_VOXELYTICS_ENABLED=true (or equivalent) so the feature remains enabled where intended.
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/collapsible_workflow_yaml_editor.tsx (1)
20-21
: Minor UX polish: Label casing and duplicate monospace styling
- Prefer “YAML” (all caps) in the label.
- You already set className="input-monospace"; the inline fontFamily is redundant and makes them diverge over time. Consider relying on the CSS class only.
- <FormItem name="workflowYaml" label="Workflow Description (yaml)"> + <FormItem name="workflowYaml" label="Workflow Description (YAML)"> @@ - style={{ - fontFamily: 'Monaco, Consolas, "Lucida Console", "Courier New", monospace', - }} + // Rely on .input-monospace stylesheet to control font familyAlso applies to: 27-28
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/should_use_trees_form_item.tsx (2)
10-22
: Avoid duplicated labeling: keep either Form label or Checkbox label text
Right now the form item has a “Manual Matches” label and the Checkbox repeats “Use manual matches from skeleton.” This duplicates accessible labels and clutters the UI. Prefer a concise checkbox without inner text (or drop the outer label and keep the checkbox text).
```diff
-    <Form.Item
+    <Form.Item
       name="useAnnotation"
+      initialValue={false}
       label={
         <Space>
           <div style={{}}>
             Manual Matches{" "}
             <Tooltip title="Please select whether the alignment should take connected skeleton nodes between adjacent sections as alignment guideline whenever available.">
               <InfoCircleOutlined />
             </Tooltip>
           </div>
         </Space>
       }
       valuePropName="checked"
@@
-      <Checkbox> Use manual matches from skeleton. </Checkbox>
+      <Checkbox />
```

Also applies to: 46-47
11-11
: Cross-check with prior requirement: ensure useAnnotation is true when evaluation is done
Per prior team learning, when evaluation is done, useAnnotation should be forced to true via a hidden Form.Item initialValue={true}. This component exposes a toggle. Please verify the consuming form enforces the required default/override for the “evaluation done” flow.
If needed, I can propose a pattern where this component remains user-configurable, while the parent form conditionally injects a hidden Form.Item to force useAnnotation=true for evaluation scenarios.
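For illustration, a minimal sketch of that pattern; the component name, import path, and the isEvaluationEnabled prop are assumptions, not the actual form code:

```tsx
import { Checkbox, Form } from "antd";
// Assumed import path for the component discussed above.
import { ShouldUseTreesFormItem } from "./should_use_trees_form_item";

// Sketch: when split/merger evaluation is enabled, force useAnnotation=true via a
// hidden Form.Item; otherwise render the user-configurable checkbox.
export function UseAnnotationField({ isEvaluationEnabled }: { isEvaluationEnabled: boolean }) {
  return isEvaluationEnabled ? (
    <Form.Item name="useAnnotation" initialValue={true} valuePropName="checked" hidden>
      <Checkbox />
    </Form.Item>
  ) : (
    <ShouldUseTreesFormItem />
  );
}
```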
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection.tsx (2)
28-31
: Value handling: prefer undefined over null for controlled Select values
antd Select expects undefined for “no selection”. Passing null can produce warnings or inconsistent behavior.
```diff
-      onSelect={setSelectedBoundingBoxId}
+      onChange={setSelectedBoundingBoxId}
@@
-      value={value}
+      value={value ?? undefined}
```
32-36
: Optional: move to data-driven options and keep JSX label + indexable search text
If you need richer search (e.g., by dimensions or volume), switch to the options prop with a computed searchText field and use it in filterOption. This avoids poking into children and keeps types clean.
I can provide a small utility getUserBoundingBoxSearchText in ai_job_modals/utils.tsx for reuse if you want to go this route.
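Something along these lines could serve as that utility; the exact UserBoundingBox fields and the import path are assumptions, so treat it as a sketch:

```ts
// Hypothetical helper for ai_job_modals/utils.tsx (import path assumed).
import type { UserBoundingBox } from "viewer/store";

// Build a lowercase search string per bounding box so the Select's filterOption
// can match on name, id, and extent without inspecting rendered JSX children.
export function getUserBoundingBoxSearchText(bbox: UserBoundingBox): string {
  const { min, max } = bbox.boundingBox;
  const extent = [max[0] - min[0], max[1] - min[1], max[2] - min[2]];
  return `${bbox.name} ${bbox.id} ${extent.join("x")}`.toLowerCase();
}
```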
frontend/javascripts/viewer/view/action-bar/ai_job_modals/hooks/use_currently_selected_bounding_box.ts (2)
13-15
: Type mismatch risk: selectedBoundingBoxId may be a string; also simplify with useMemo
Form fields can return string values for Selects. Comparing string "3" to numeric 3 fails strict equality. Also, this is a derived value; you can compute it via useMemo instead of useEffect + useState.
```diff
-import { useEffect, useState } from "react";
+import { useMemo } from "react";
@@
-  const [currentlySelectedBoundingBox, setCurrentlySelectedBoundingBox] = useState<
-    UserBoundingBox | undefined
-  >(undefined);
-  // userBoundingBoxes, defaultBBForLayers, layers are different objects with each calls,
-  // but they shouldn't be able to change while the modal is open
-  // biome-ignore lint/correctness/useExhaustiveDependencies: see above
-  useEffect(() => {
-    const currentSelectedLayer = layers.find((layer) => layer.name === currentlySelectedLayerName);
-    const indexOfLayer = currentSelectedLayer ? layers.indexOf(currentSelectedLayer) : -1;
-    const newCurrentlySelectedBoundingBox = isBoundingBoxConfigurable
-      ? userBoundingBoxes.find((bbox) => bbox.id === selectedBoundingBoxId)
-      : indexOfLayer >= 0
-        ? defaultBBForLayers[indexOfLayer]
-        : undefined;
-    setCurrentlySelectedBoundingBox(newCurrentlySelectedBoundingBox);
-  }, [selectedBoundingBoxId, currentlySelectedLayerName, isBoundingBoxConfigurable]);
-  return currentlySelectedBoundingBox;
+  const selectedBoundingBoxIdNum =
+    typeof selectedBoundingBoxId === "string"
+      ? Number.parseInt(selectedBoundingBoxId, 10)
+      : (selectedBoundingBoxId as number | undefined);
+
+  const currentlySelectedBoundingBox = useMemo<UserBoundingBox | undefined>(() => {
+    const currentSelectedLayer = layers.find((layer) => layer.name === currentlySelectedLayerName);
+    const indexOfLayer = currentSelectedLayer ? layers.indexOf(currentSelectedLayer) : -1;
+    if (isBoundingBoxConfigurable) {
+      return userBoundingBoxes.find((bbox) => bbox.id === selectedBoundingBoxIdNum);
+    }
+    return indexOfLayer >= 0 ? defaultBBForLayers[indexOfLayer] : undefined;
+  }, [
+    selectedBoundingBoxIdNum,
+    currentlySelectedLayerName,
+    isBoundingBoxConfigurable,
+    userBoundingBoxes,
+    defaultBBForLayers,
+    layers,
+  ]);
+  return currentlySelectedBoundingBox;
```

Also applies to: 24-31
18-21
: Document the dependency trade-off
If you intentionally avoid depending on arrays for stability reasons, document it explicitly and consider freezing inputs at callsite to guarantee reference stability during modal lifetime.
I can add a small helper freezeAtMount in the consuming form to snapshot userBoundingBoxes/defaultBBForLayers/layers when the modal opens if you want to keep the effect approach.
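A sketch of such a helper, under the assumption that a simple ref-based snapshot is enough (the hook name is hypothetical):

```ts
import { useRef } from "react";

// Snapshot a value on first render so later prop identity changes cannot
// retrigger derived-state effects while the modal stays open.
export function useFreezeAtMount<T>(value: T): T {
  const frozenRef = useRef(value);
  return frozenRef.current;
}

// Usage idea in the consuming form:
//   const frozenLayers = useFreezeAtMount(layers);
```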
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/collapsible_split_merger_evaluation_settings.tsx (2)
58-58
: Guard against invalid numeric input
These inputs represent distances/lengths; add min={0} to prevent negative values. Optional, but prevents footguns for users and the backend.
```diff
-            <InputNumber style={{ width: "100%" }} placeholder="None" />
+            <InputNumber min={0} style={{ width: "100%" }} placeholder="None" />
```

```diff
-            <InputNumber style={{ width: "100%" }} placeholder="1000" />
+            <InputNumber min={0} style={{ width: "100%" }} placeholder="1000" />
```

```diff
-            <InputNumber style={{ width: "100%" }} placeholder="800" />
+            <InputNumber min={0} style={{ width: "100%" }} placeholder="800" />
```

Also applies to: 66-66, 76-76
15-19
: Make Collapse fully controlled and future-proof
- Use the keys provided by onChange to derive active state.
- Use expandIcon’s isActive from panel props to avoid closures.
- Keep activeKey consistently an array for non-accordion usage.
Functionality is unchanged but more robust to future additions.
```diff
-    <Collapse
+    <Collapse
       style={{ marginBottom: 8 }}
-      onChange={() => setActive(!isActive)}
-      expandIcon={() => <Checkbox checked={isActive} />}
+      onChange={(keys) => {
+        const nextActive =
+          Array.isArray(keys) ? keys.includes("evaluation") : keys === "evaluation";
+        setActive(nextActive);
+      }}
+      expandIcon={({ isActive: panelIsActive }) => <Checkbox checked={panelIsActive} />}
       items={[
```

```diff
-      activeKey={isActive ? "evaluation" : []}
+      activeKey={isActive ? ["evaluation"] : []}
```

Also applies to: 84-85
app/models/aimodels/AiModel.scala (1)
191-195
: Should category be updatable?
updateOne omits category. If categories can change (e.g., migrating a model from “neuron” to “instance”), include it; otherwise, a comment clarifying immutability would help.
If category should be updatable:
- q"UPDATE webknossos.aiModels SET name = ${a.name}, comment = ${a.comment}, modified = ${a.modified} WHERE _id = ${a._id}".asUpdate) + q"UPDATE webknossos.aiModels SET name = ${a.name}, comment = ${a.comment}, category = ${a.category.map(_.toString)}, modified = ${a.modified} WHERE _id = ${a._id}".asUpdate)Also confirm that a DB migration added the category column to webknossos.aiModels and that nulls are allowed for legacy rows.
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/experimental_inference_alert.tsx (1)
3-11
: Small copy nit: remove extra space; consider i18n extraction
There’s a double space after “data.” and the copy is hardcoded. Suggest trimming the spacing and moving the string into the shared messages/i18n system for consistency with the rest of the UI.
Apply this spacing fix:
- message="Please note that this feature is experimental and currently only works with electron microscopy data. If the specified bounding box is too close to the border of the dataset's bounding box, its size might be reduced automatically." + message="Please note that this feature is experimental and currently only works with electron microscopy data. If the specified bounding box is too close to the border of the dataset's bounding box, its size might be reduced automatically."frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/mag_slider.tsx (2)
21-24
: Improve tooltip accuracy and avoid variable shadowing in onChange
- The tooltip formatter ignores the index argument and relies on the outer value prop; use the provided index for accurate live feedback.
- Rename the onChange parameter to index and type it to avoid shadowing/confusion.
```diff
-      tooltip={{
-        formatter: () => value.join("-"),
-      }}
+      tooltip={{
+        formatter: (idx) =>
+          typeof idx === "number" && allMags[idx]
+            ? allMags[idx][1].join("-")
+            : undefined,
+      }}
@@
-      onChange={(value) => onChange(allMags[value][1])}
+      onChange={(index: number) => onChange(allMags[index][1])}
```

Also applies to: 32-34
16-21
: Guard against empty magnification lists and make selected index derivation explicit
While MagInfo typically exposes at least one magnification and value is expected to be present (per our past learnings), a defensive guard avoids edge-case UI breakage (min > max or out-of-bounds). Also, making the derived slider value explicit improves readability.
```diff
 export function MagSlider({
@@
   // Use `getMagsWithIndices` because returns a sorted list
   const allMags = magnificationInfo.getMagsWithIndices();
+  // Defensive guard: no magnifications available
+  if (allMags.length === 0) return null;
+
+  const selectedIndex = allMags.findIndex(([, v]) => V3.equals(v, value));
+  const clampedIndex = clamp(0, selectedIndex, allMags.length - 1);
+
   return (
     <Slider
@@
-      min={0}
-      max={allMags.length - 1}
+      min={0}
+      max={allMags.length - 1}
       step={1}
-      value={clamp(
-        0,
-        allMags.findIndex(([, v]) => V3.equals(v, value)),
-        allMags.length - 1,
-      )}
+      value={clampedIndex}
```

Also applies to: 24-31
frontend/javascripts/viewer/view/action_bar_view.tsx (1)
318-324
: AI Analysis gating may hide new instance-seg workflows
The AI button is enabled only if workers support a fixed set of jobs. With instance segmentation and custom model inference added, consider including the corresponding API job types (e.g., INFER_WITH_MODEL or instance inference type) so the button isn't hidden when only those are available.
Suggested change:
```diff
 const jobsEnabled =
   dataset.dataStore.jobsSupportedByAvailableWorkers.includes(APIJobType.INFER_NEURONS) ||
   dataset.dataStore.jobsSupportedByAvailableWorkers.includes(APIJobType.INFER_MITOCHONDRIA) ||
   dataset.dataStore.jobsSupportedByAvailableWorkers.includes(APIJobType.INFER_NUCLEI) ||
+  dataset.dataStore.jobsSupportedByAvailableWorkers.includes(APIJobType.INFER_WITH_MODEL) ||
   dataset.dataStore.jobsSupportedByAvailableWorkers.includes(APIJobType.ALIGN_SECTIONS);
```
Please verify the exact enum name(s) for the new instance inference/custom model job type(s) and adjust accordingly.
frontend/javascripts/viewer/view/action-bar/ai_job_modals/tabs/alignment_tab.tsx (1)
19-23
: Lazy-load the image to avoid blocking the modal render
Add loading="lazy" to the image to improve perceived performance when opening the modal.
```diff
   <img
     src={`/assets/images/${jobNameToImagePath.align_sections}`}
     alt={"Example of improved alignment of slices"}
     style={centerImageStyle}
+    loading="lazy"
   />
```
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/nuclei_detection_form.tsx (1)
19-22
: Unify wording: “detection” vs “segmentation”
The button says “Start AI nuclei detection” while the title says “AI Nuclei Segmentation.” Consider using one term consistently to avoid user confusion.
- buttonLabel="Start AI nuclei detection" + buttonLabel="Start AI nuclei segmentation"frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/mitochondria_segmentation_form.tsx (1)
26-40
: Optional UX: provide feedback when the bounding box is missing/too small.
Currently the function returns early with no user-facing hint. Consider showing a toast or form validation message so users know why submission didn’t proceed.
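One way this could look as a sketch; the Toast import path and message wording are assumptions:

```ts
import Toast from "libs/toast";

// Sketch: surface an explicit message instead of silently returning
// when no bounding box has been selected.
function ensureBoundingBoxSelected(selectedBoundingBox: unknown): boolean {
  if (!selectedBoundingBox) {
    Toast.error("Please select a bounding box before starting the mitochondria segmentation job.");
    return false;
  }
  return true;
}
```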
frontend/javascripts/viewer/view/action-bar/ai_job_modals/start_ai_job_modal.tsx (1)
44-57
: Prevent accidental modal closure.
Consider disabling mask-click close to avoid losing user input in forms.
```diff
   return aIJobModalState !== "invisible" ? (
     <Modal
       width={875}
       open
+      maskClosable={false}
       title={
```
frontend/javascripts/admin/voxelytics/ai_model_list_view.tsx (1)
275-279
: Nit: grammar in helper text.
Minor copy fix.
```diff
-            Select all organization that should have access to the AI model{" "}
+            Select all organizations that should have access to the AI model{" "}
```

frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/custom_ai_model_inference_form.tsx (3)
89-95
: Use a neutral jobName for custom model inference (avoid neuron-specific labeling/cost).
Passing APIJobType.INFER_NEURONS to StartJobForm for custom-model inferences is misleading (and may affect UI elements that depend on jobName, e.g., image/copy/credits). Prefer a neutral key (you already have jobNameToImagePath.infer_with_model).
Apply this diff:
```diff
-      jobName={APIJobType.INFER_NEURONS}
+      jobName="infer_with_model"
```
116-125
: Tighten validation for seedGeneratorDistanceThreshold.
Add explicit numeric/min validation at the form-rule level to guard against non-numeric values and enforce the lower bound in validation (not just via InputNumber UI).
Apply this diff:
```diff
-        <Form.Item
+        <Form.Item
           name="seedGeneratorDistanceThreshold"
           label="Seed generator distance threshold (nm)"
           tooltip="Controls the minimum distance in nanometers between generated seeds."
-          rules={[{ required: true, message: "Please enter positive number" }]}
+          rules={[
+            { required: true, message: "Please enter a positive number" },
+            { type: "number", min: 0.1, message: "Value must be ≥ 0.1 nm" },
+          ]}
           initialValue={1000.0}
         >
-          <InputNumber min={0.1} suffix="nm" />
+          <InputNumber min={0.1} step={50} suffix="nm" />
         </Form.Item>
```
110-115
: Allow clearing model selection and reset instance-mode toggle.
Minor UX polish: enabling allowClear on Select helps users back out of a choice; on clear, reset the instance-model flag to keep the form consistent.
Apply this diff:
```diff
-          <Select
+          <Select
             loading={isLoading}
             options={aiModels.map((aiModel) => ({ value: aiModel.id, label: aiModel.name }))}
             onSelect={handleOnSelect}
+            allowClear
+            onClear={() => setIsInstanceModelSelected(false)}
           />
```

Also applies to: 77-84
frontend/javascripts/viewer/view/action-bar/ai_job_modals/tabs/run_ai_model_tab.tsx (1)
40-57
: Superuser-only switch: redundant disabled prop.
You already render the Switch only if isSuperUser is true. The additional disabled={!isSuperUser} is redundant.
Apply this diff:
```diff
-          <Switch
+          <Switch
             checkedChildren="Custom"
             unCheckedChildren="Default"
             checked={showCustomAiModels}
-            disabled={!isSuperUser}
             style={{
               marginBottom: 6,
             }}
             onChange={(bool) => {
               setShowCustomAiModels(bool);
             }}
           />
```

frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection_form_item.tsx (2)
70-74
: Generalize neuron-specific error copy for reusability across jobs.
This component is reused by multiple AI jobs (neurons, mitochondria, custom models). The error message should be job-agnostic.
Apply this diff:
```diff
-    rejectionReason = `The volume of the selected bounding box is too large. The AI neuron segmentation trial is only supported for up to ${
+    rejectionReason = `The volume of the selected bounding box is too large. This AI job is only supported for up to ${
       features().exportTiffMaxVolumeMVx
     } Megavoxels. Additionally, no bounding box edge should be longer than ${
       features().exportTiffMaxEdgeLengthVx
     }vx.`;
```
20-27
: Prefer passing the selected layer’s mag into this component.
Validation depends on mag. Using the first color layer is brittle when the user picks a different color layer in StartJobForm. Consider extending props to accept magForValidation (Vector3) and use it when provided.
For example:
```diff
 type BoundingBoxSelectionProps = {
   isBoundingBoxConfigurable?: boolean;
   userBoundingBoxes: UserBoundingBox[];
   isSuperUser: boolean;
   showVolume: boolean;
   onChangeSelectedBoundingBox: (bBoxId: number | null) => void;
   value: number | null;
+  magForValidation?: [number, number, number];
 };

 export function BoundingBoxSelectionFormItem({
   isBoundingBoxConfigurable,
   userBoundingBoxes,
   isSuperUser,
   showVolume = false,
   onChangeSelectedBoundingBox,
   value: selectedBoundingBoxId,
+  magForValidation,
 }: BoundingBoxSelectionProps): JSX.Element {
   const dataset = useWkSelector((state) => state.dataset);
-  const colorLayers = getColorLayers(dataset);
-  const mag1 = colorLayers?.[0]?.resolutions?.[0];
+  const defaultMag1 = getColorLayers(dataset)?.[0]?.resolutions?.[0];
+  const mag1 = magForValidation ?? defaultMag1;
```

Then pass magForValidation from StartJobForm based on the currently selected color layer.
Also applies to: 32-34
frontend/javascripts/viewer/view/action-bar/ai_job_modals/materialize_volume_annotation_modal.tsx (1)
41-54
: Minor: consider deriving jobName from APIJobType for consistency.
You define jobName as a string to index jobNameToImagePath, while StartJobForm uses APIJobType.MATERIALIZE_VOLUME_ANNOTATION. To avoid drift, consider deriving the key from APIJobType if those values match the mapping keys.
No code change required if APIJobType.MATERIALIZE_VOLUME_ANNOTATION already equals "materialize_volume_annotation".
Also applies to: 69-81
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/start_job_form.tsx (2)
204-205
: Consider using a descriptive suffix
The dataset suffix uses a "with_" prefix. Consider a more direct suffix like "_inferred" or "_predicted" for consistency with other dataset naming conventions.
```diff
-            initialName={`${dataset.name}_${props.suggestedDatasetSuffix}`}
+            initialName={`${dataset.name}${props.suggestedDatasetSuffix}`}
```
124-130
: Verify handling of uint24 layers
The check for uint24 layers happens within startJob, but could be performed earlier to improve user experience. Consider disabling the submit button or filtering out uint24 layers from the selection.
Add a validation rule to prevent uint24 selection:
```diff
 const layers = chooseSegmentationLayer ? getSegmentationLayers(dataset) : colorLayers;
+// Filter out uint24 layers that cannot be used for AI jobs
+const availableLayers = layers.filter(layer => layer.elementClass !== "uint24");
```

Then update the layers prop passed to LayerSelectionFormItem:
```diff
-          layers={layers}
+          layers={availableLayers}
```

frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx (2)
139-159
: Complex watcher logic could be simplified
The watcher function uses a ref to avoid stale closures, but this pattern is complex and could be simplified using React hooks more idiomatically.
Consider using useMemo with proper dependencies instead of the ref pattern:

```diff
-  const watcherFunctionRef = useRef(() => {
-    return [new MagInfo([])];
-  });
-  watcherFunctionRef.current = () => {
+  const getMagInfos = useCallback(() => {
     // ... existing logic
-  };
+  }, [form, annotationInfos]);

   const magInfoForLayer: Array<MagInfo> = Form.useWatch(() => {
-    return watcherFunctionRef.current();
+    return getMagInfos();
   }, form);
```
294-299
: Complex filter logic could be clearer
The filter logic for segmentation layers is complex and could benefit from a comment explaining the intention.
```diff
+      // Exclude layers that are already annotation layers (volume tracings)
       .filter(
         (tracingId) =>
           !annotation.annotationLayers.find(
             (annotationLayer) => annotationLayer.tracingId === tracingId,
           ),
       );
```
frontend/javascripts/admin/api/jobs.ts (1)
398-408
: Consider documenting optional fields
The BaseModelInferenceParameters type has several optional fields. Consider adding JSDoc comments to clarify when these fields should be provided.
```diff
 export type BaseModelInferenceParameters = {
+  /** Optional annotation ID for inference context */
   annotationId?: string;
   aiModelId: string;
   datasetDirectoryName: string;
   organizationId: string;
   colorLayerName: string;
   boundingBox: Vector6;
   newDatasetName: string;
+  /** Optional custom workflow YAML configuration */
   workflowYaml?: string;
   // maskAnnotationLayerName?: string | null
 };
```

frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx (3)
56-66
: Inconsistent color scale for auto-generated layer bounding boxes
To avoid double-scaling, the default color for full-layer boxes should be normalized to [1, 1, 1] if render logic multiplies by 255.
Apply this diff:
```diff
-      color: [255, 255, 255],
+      color: [1, 1, 1],
```

Note: With the render-side normalization fix proposed above, this remains safe but still preferable for consistency in the data model.
163-169
: Simplify and type-safely access annotation IDs
Both APIAnnotation and StoreAnnotation contain an
id
. Using the"id" in annotation
fallback toannotation.annotationId
is unnecessary and brittle.Apply this diff to constrain the generic to types that provide
id
and simplify the code:-export function checkAnnotationsForErrorsAndWarnings<T extends StoreAnnotation | APIAnnotation>( +export function checkAnnotationsForErrorsAndWarnings<T extends (StoreAnnotation | APIAnnotation) & { id: string }>( annotationsWithDatasets: Array<AnnotationInfoForAITrainingJob<T>>, @@ - if (annotationsWithoutBoundingBoxes.length > 0) { - const annotationIds = annotationsWithoutBoundingBoxes.map(({ annotation }) => - "id" in annotation ? annotation.id : annotation.annotationId, - ); + if (annotationsWithoutBoundingBoxes.length > 0) { + const annotationIds = annotationsWithoutBoundingBoxes.map(({ annotation }) => annotation.id);Also applies to: 181-183
299-316
: Warning text could mislead: “not at least 32 Vx” yet treated as a warning
If boxes smaller than 32 Vx per dim severely impact training stability, consider upgrading this to an error or making the threshold configurable by job type.
app/controllers/AiModelController.scala (1)
315-317
: Minor: misleading variable name
newAiModelJs
holds an inference JSON, not a model. Rename for clarity.

```diff
-      newAiModelJs <- aiInferenceService.publicWrites(newAiInference, request.identity)
-    } yield Ok(newAiModelJs)
+      newAiInferenceJs <- aiInferenceService.publicWrites(newAiInference, request.identity)
+    } yield Ok(newAiInferenceJs)
```
"sharedOrganizationIds" -> sharedOrganizationIds, | ||
"category" -> aiModel.category | ||
) |
Serialize category as string to match frontend contract
publicWrites currently passes an Option[AiModelCategory]. Unless an implicit Writes exists for AiModelCategory, this will either not compile or produce unexpected JSON. The frontend expects a string union.
Apply:
- "sharedOrganizationIds" -> sharedOrganizationIds,
- "category" -> aiModel.category
+ "sharedOrganizationIds" -> sharedOrganizationIds,
+ "category" -> aiModel.category.map(_.toString)
🤖 Prompt for AI Agents
In app/models/aimodels/AiModel.scala around lines 74 to 76, the current
publicWrites is passing an Option[AiModelCategory] directly which relies on an
implicit Writes for AiModelCategory and can produce wrong JSON; change the
serialization to emit the category as an Option[String] (e.g. map the category
to its string name/value) so the JSON field "category" becomes a string union
matching the frontend contract, and remove reliance on an implicit Writes for
the enum.
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx (outdated review thread, resolved)
```ts
    const upscaledColor = bbox.color.map((colorPart) => colorPart * 255) as any as Vector3;
    const colorAsHexString = rgbToHex(upscaledColor);
    const volumeInVx = new BoundingBox(bbox.boundingBox).getVolume();
```
🛠️ Refactor suggestion
Color scale bug: double scaling to 0–255 likely produces invalid hex colors
bbox.color is multiplied by 255 assuming normalized [0–1] input, but getBoundingBoxesForLayers assigns [255, 255, 255]. This results in values like 65025 and broken colors. Normalize robustly at render time.
Apply this diff to normalize/clamp color values defensively:
```diff
-    const upscaledColor = bbox.color.map((colorPart) => colorPart * 255) as any as Vector3;
-    const colorAsHexString = rgbToHex(upscaledColor);
+    const clamp255 = (v: number) => Math.max(0, Math.min(255, Math.round(v)));
+    const isNormalized = bbox.color.every((c) => c <= 1);
+    const color0to255 = (isNormalized ? bbox.color.map((c) => c * 255) : bbox.color).map(clamp255) as Vector3;
+    const colorAsHexString = rgbToHex(color0to255);
```
🤖 Prompt for AI Agents
In frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx around
lines 37 to 39, the code currently multiplies bbox.color by 255 unconditionally
which double-scales colors already in 0–255 and yields invalid hex; replace that
with a defensive normalization: detect if any color component > 1 then assume
inputs are 0–255 and divide all components by 255, otherwise treat them as
already in 0–1; clamp each component to [0,1], then multiply by 255, Math.round
and clamp to [0,255], cast to Vector3 (or the expected RGB tuple type) and pass
to rgbToHex. Ensure you do not modify bounding box volume code on the next line.
```ts
export const getBestFittingMagComparedToTrainingDS = (
  colorLayer: APIDataLayer,
  datasetScaleMag1: VoxelSize,
  jobType: APIJobType.INFER_MITOCHONDRIA | APIJobType.INFER_NEURONS | APIJobType.INFER_NUCLEI,
) => {
  if (jobType === APIJobType.INFER_MITOCHONDRIA) {
    // infer_mitochondria_model always infers on the finest mag of the current dataset
    const magInfo = getMagInfo(colorLayer.resolutions);
    return magInfo.getFinestMag();
  }
  const modelScale = MEAN_VX_SIZE[jobType];
  let closestMagOfCurrentDS = colorLayer.resolutions[0];
  let bestDifference = [
    Number.POSITIVE_INFINITY,
    Number.POSITIVE_INFINITY,
    Number.POSITIVE_INFINITY,
  ];

  const datasetScaleInNm = convertVoxelSizeToUnit(datasetScaleMag1, UnitShort.nm);

  for (const mag of colorLayer.resolutions) {
    const diff = datasetScaleInNm.map((dim, i) =>
      Math.abs(Math.log(dim * mag[i]) - Math.log(modelScale[i])),
    );
    if (bestDifference[0] > diff[0]) {
      bestDifference = diff;
      closestMagOfCurrentDS = mag;
    }
  }
  const maxDistance = Math.max(...bestDifference);
  const resultText = `Using mag [${closestMagOfCurrentDS}]. This results in an effective voxel size of [${datasetScaleInNm.map((scale, i) => Math.round(scale * closestMagOfCurrentDS[i]))}] (compared to voxel size [${modelScale.map((scale) => Math.round(scale))}] used during training).`;
  if (maxDistance > Math.log(2)) {
    Toast.warning(resultText);
  } else {
    Toast.info(resultText);
    console.info(resultText);
  }
  return closestMagOfCurrentDS;
};
```
Mag selection only optimizes X dimension; consider all axes
The current selection minimizes diff[0] (X), ignoring Y/Z. This can pick a mag with significantly worse Y/Z scale mismatch. Use a scalar score such as max or L2 across all axes.
Apply this diff to minimize the worst-axis deviation and align the warning threshold:
- const modelScale = MEAN_VX_SIZE[jobType];
- let closestMagOfCurrentDS = colorLayer.resolutions[0];
- let bestDifference = [
- Number.POSITIVE_INFINITY,
- Number.POSITIVE_INFINITY,
- Number.POSITIVE_INFINITY,
- ];
+ const modelScale = MEAN_VX_SIZE[jobType];
+ let closestMagOfCurrentDS = colorLayer.resolutions[0];
+ let bestScore = Number.POSITIVE_INFINITY;
@@
- for (const mag of colorLayer.resolutions) {
- const diff = datasetScaleInNm.map((dim, i) =>
- Math.abs(Math.log(dim * mag[i]) - Math.log(modelScale[i])),
- );
- if (bestDifference[0] > diff[0]) {
- bestDifference = diff;
- closestMagOfCurrentDS = mag;
- }
- }
- const maxDistance = Math.max(...bestDifference);
+ for (const mag of colorLayer.resolutions) {
+ const diff = datasetScaleInNm.map((dim, i) =>
+ Math.abs(Math.log(dim * mag[i]) - Math.log(modelScale[i])),
+ );
+ const score = Math.max(...diff); // worst-axis mismatch
+ if (score < bestScore) {
+ bestScore = score;
+ closestMagOfCurrentDS = mag;
+ }
+ }
+ const maxDistance = bestScore;
```ts
      const minExtentInMag1 = minBBoxExtentInModelMag.map((extent, i) =>
        Math.round(extent * mag[i]),
      ) as Vector3;
```
🛠️ Refactor suggestion
Use ceil when converting required extents to mag1
Rounding can accept too-small boxes when required extents land between integers. Ceil enforces true minimums.
```diff
-      const minExtentInMag1 = minBBoxExtentInModelMag.map((extent, i) =>
-        Math.round(extent * mag[i]),
-      ) as Vector3;
+      const minExtentInMag1 = minBBoxExtentInModelMag.map((extent, i) =>
+        Math.ceil(extent * mag[i]),
+      ) as Vector3;
```
🤖 Prompt for AI Agents
In frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx around
lines 119 to 121, the conversion from required extents to mag1 uses Math.round
which can produce a value smaller than required when the true extent is
fractional; replace Math.round with Math.ceil to enforce the minimum size,
keeping the same mapping and type cast to Vector3 so computed extents never
undershoot the required size.
@MichaelBuessemeyer This sound legit. What do you think?
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
…s/mitochondria_segmentation_form.tsx Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 0
♻️ Duplicate comments (1)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/mitochondria_segmentation_form.tsx (1)
45-48
: Resolved: Corrected description from “neuron segmentation” to “mitochondria segmentation.”
The copy now accurately reflects the job type. Thanks for addressing this.
🧹 Nitpick comments (3)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/mitochondria_segmentation_form.tsx (3)
21-23
: Unify capitalization between title and button label.
Minor copy nit: the title is Title Case while the button label is sentence case. Consider aligning for consistency.
Apply this if you want Title Case on both:
- buttonLabel="Start AI mitochondria segmentation" + buttonLabel="Start AI Mitochondria Segmentation"
26-30
: Don’t silently no-op when no bounding box; surface an explicit error.
StartJobForm already validates required fields, but if this guard is ever hit, returning silently provides no feedback. Throwing makes the error surface via the form’s generic error handling/toast.
```diff
-    if (!selectedBoundingBox) {
-      return;
-    }
+    if (!selectedBoundingBox) {
+      throw new Error("Bounding box is required to start mitochondria inference.");
+    }
```
25-25
: Optional: Provide a safe fallback if the feature flag is missing.
If mitochondriaInferralCostPerGVx is undefined (e.g., older envs), the UI may render oddly. A nullish coalescing fallback keeps the UI stable.
```diff
-        jobCreditCostPerGVx={mitochondriaInferralCostPerGVx}
+        jobCreditCostPerGVx={mitochondriaInferralCostPerGVx ?? 0}
```
📒 Files selected for processing (1)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/mitochondria_segmentation_form.tsx
(1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/mitochondria_segmentation_form.tsx (9)
- frontend/javascripts/libs/react_hooks.ts (1): useWkSelector (292-294)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/start_job_form.tsx (1): StartJobForm (56-246)
- frontend/javascripts/viewer/model/actions/ui_actions.ts (1): setAIJobModalStateAction (136-140)
- frontend/javascripts/test/fixtures/hybridtracing_object.ts (1): colorLayer (12-31)
- frontend/javascripts/libs/utils.ts (1): computeArrayFromBoundingBox (299-308)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx (2): getBestFittingMagComparedToTrainingDS (71-109), isDatasetOrBoundingBoxTooSmall (134-153)
- frontend/javascripts/admin/api/jobs.ts (1): startMitochondriaInferralJob (314-331)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/experimental_inference_alert.tsx (1): ExperimentalInferenceAlert (3-11)
- frontend/javascripts/viewer/view/action-bar/starting_job_modals.tsx (1): MitochondriaSegmentationForm (1184-1227)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build-smoketest-push
- GitHub Check: frontend-tests
- GitHub Check: backend-tests
🔇 Additional comments (2)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/mitochondria_segmentation_form.tsx (2)
31-38
: LGTM: Correct mag calculation and size guard before job start.
Using getBestFittingMagComparedToTrainingDS and isDatasetOrBoundingBoxTooSmall mirrors the pre-refactor logic and prevents invalid submissions.
1-1
: Import verified — no change required
rest_api.ts re-exports the jobs API, and startMitochondriaInferralJob is defined and exported from admin/api/jobs.ts, so the existing import is valid.
- frontend/javascripts/admin/rest_api.ts — contains `export * from "./api/jobs";` (line ~106)
- frontend/javascripts/admin/api/jobs.ts — defines `export function startMitochondriaInferralJob(...)` (line ~314)

Remove the suggested diff; keep the import from "admin/rest_api" as-is.
Actionable comments posted: 3
♻️ Duplicate comments (1)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection_form_item.tsx (1)
34-36
: Guard against missing color layers/resolutions; skip validation when mag is unknown. Accessing
getColorLayers(dataset)[0].resolutions[0]
can crash when color layers or resolutions are missing. Validation should pass whenmag1
can't be determined.Apply this diff:
@@ - const colorLayer = getColorLayers(dataset)[0]; - const mag1 = colorLayer.resolutions[0]; + const colorLayers = getColorLayers(dataset); + const mag1 = colorLayers?.[0]?.resolutions?.[0]; @@ const selectedBoundingBox = userBoundingBoxes.find((bbox) => bbox.id === value); if (selectedBoundingBox) { - const { isExportable } = isBoundingBoxExportable(selectedBoundingBox.boundingBox, mag1); + // If mag cannot be determined, do not block the user. + if (mag1 == null) { + return Promise.resolve(); + } + const { isExportable } = isBoundingBoxExportable(selectedBoundingBox.boundingBox, mag1); if (isExportable) { return Promise.resolve(); } return Promise.reject( new Error( - `The volume of the selected bounding box is too large. The AI neuron segmentation trial is only supported for up to ${ + `The volume of the selected bounding box is too large. The AI segmentation job is only supported for up to ${ features().exportTiffMaxVolumeMVx } Megavoxels. Additionally, no bounding box edge should be longer than ${ features().exportTiffMaxEdgeLengthVx }vx.`, ), ); }Also applies to: 46-65
🧹 Nitpick comments (9)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/hooks/use_currently_selected_bounding_box.ts (2)
15-37
: Replace derived state + effect with useMemo; simplify index lookup and harden ID handling
This logic is pure derivation from inputs; using state + effect causes an extra render and is easier to desync. Also, use findIndex directly and coerce the watched ID to a number to avoid strict-equality mismatches when a Select passes stringified values.
Apply this diff:
@@ -import { useEffect, useState } from "react"; +import { useMemo } from "react"; @@ - const [currentlySelectedBoundingBox, setCurrentlySelectedBoundingBox] = useState< - UserBoundingBox | undefined - >(undefined); - - useEffect(() => { - const currentSelectedLayer = layers.find((layer) => layer.name === currentlySelectedLayerName); - const indexOfLayer = currentSelectedLayer ? layers.indexOf(currentSelectedLayer) : -1; - const newCurrentlySelectedBoundingBox = isBoundingBoxConfigurable - ? userBoundingBoxes.find((bbox) => bbox.id === selectedBoundingBoxId) - : indexOfLayer >= 0 - ? defaultBBForLayers[indexOfLayer] - : undefined; - setCurrentlySelectedBoundingBox(newCurrentlySelectedBoundingBox); - }, [ - selectedBoundingBoxId, - currentlySelectedLayerName, - isBoundingBoxConfigurable, - layers, - userBoundingBoxes, - defaultBBForLayers, - ]); - return currentlySelectedBoundingBox; + const currentlySelectedBoundingBox = useMemo(() => { + const indexOfLayer = layers.findIndex( + (layer) => layer.name === currentlySelectedLayerName, + ); + const selectedIdNum = + typeof selectedBoundingBoxId === "number" + ? selectedBoundingBoxId + : Number(selectedBoundingBoxId); + if (isBoundingBoxConfigurable) { + return userBoundingBoxes.find((bbox) => bbox.id === selectedIdNum); + } + return indexOfLayer >= 0 ? defaultBBForLayers[indexOfLayer] : undefined; + }, [ + selectedBoundingBoxId, + currentlySelectedLayerName, + isBoundingBoxConfigurable, + layers, + userBoundingBoxes, + defaultBBForLayers, + ]); + return currentlySelectedBoundingBox;Also applies to: 2-2
22-26
: Confirm layer-to-defaultBB alignment; index-based mapping is brittle
Indexing defaultBBForLayers by the position of the layer assumes both arrays stay in lockstep. If they can diverge, consider a name-keyed map or carry the layer name in the default BBs.
Would you like me to draft a small helper that maps defaults by layer name and updates the call sites?
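A sketch of what a name-keyed lookup could look like; the types, field names, and import paths are assumptions:

```ts
import type { APIDataLayer } from "types/api_types";
import type { UserBoundingBox } from "viewer/store";

// Map each layer name to its default bounding box so later lookups no longer
// depend on both arrays staying in the same order.
export function buildDefaultBBoxByLayerName(
  layers: APIDataLayer[],
  defaultBBForLayers: UserBoundingBox[],
): Map<string, UserBoundingBox> {
  const byName = new Map<string, UserBoundingBox>();
  layers.forEach((layer, index) => {
    const bbox = defaultBBForLayers[index];
    if (bbox != null) {
      byName.set(layer.name, bbox);
    }
  });
  return byName;
}
```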
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection.tsx (1)
28-31
: Nit: Narrow the options type for better IDE help
You can type
options
asDefaultOptionType[]
directly for clarity.- const options: SelectProps["options"] = userBoundingBoxes.map((userBB) => ({ + const options: DefaultOptionType[] = userBoundingBoxes.map((userBB) => ({ value: userBB.id, label: renderUserBoundingBox(userBB, showVolume), }));frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection_form_item.tsx (3)
84-88
: Typo: “inferral” → “inference”
Minor wording fix in the validation message.
- message: "Please select the bounding box for which the inferral should be computed.", + message: "Please select the bounding box for which the inference should be computed.",
10-10
: Decouple from download modal utilitiesImporting
isBoundingBoxExportable
from../../download_modal_view
creates an odd cross-feature coupling. Consider moving shared logic intoai_job_modals/utils.tsx
(or a shared utils module) and importing from there.I can extract and consolidate these utilities into
ai_job_modals/utils.tsx
and update imports across call sites if you want.
67-101
: Redundant hiding (wrapper div + Form.Item hidden)You hide the control twice (wrapper
style
andForm.Item
'shidden
). Keep thehidden
onForm.Item
and drop the wrapper’s display: none for cleaner DOM and fewer conditionals.- return ( - <div style={isBoundingBoxConfigurable ? {} : { display: "none" }}> - <Form.Item + return ( + <Form.Item label={ @@ - > - <BoundingBoxSelection + > + <BoundingBoxSelection @@ - /> - </Form.Item> - </div> + /> + </Form.Item>frontend/javascripts/viewer/view/action-bar/ai_job_modals/materialize_volume_annotation_modal.tsx (3)
35-38
: Unify fallback-layer detection to the actual property
You check "tracingId" presence but then read fallbackLayer. Prefer detecting via "fallbackLayer" directly for clarity and correctness if types evolve.
- const hasFallbackLayer = - fixedSelectedLayer && "tracingId" in fixedSelectedLayer - ? fixedSelectedLayer.fallbackLayer != null - : false; + const hasFallbackLayer = + fixedSelectedLayer && "fallbackLayer" in fixedSelectedLayer + ? fixedSelectedLayer.fallbackLayer != null + : false;
33-35
: Avoid “null/undefined” in the user-facing description
getReadableNameOfVolumeLayer can return null. In that case, the template string would render "null". Fall back to the layer’s name to keep the copy clean.
- const readableVolumeLayerName = - fixedSelectedLayer && getReadableNameOfVolumeLayer(fixedSelectedLayer, tracing); + const readableVolumeLayerName = fixedSelectedLayer + ? getReadableNameOfVolumeLayer(fixedSelectedLayer, tracing) ?? fixedSelectedLayer.name + : null;Also applies to: 47-49
133-136
: Bounding-box configurability should reflect the selected layer, not only the preselected one
isBoundingBoxConfigurable is derived from fixedSelectedLayer (via includesEditableMapping). If the user chooses a different layer in StartJobForm, the toggle may be wrong. Consider making this reactive to the currently selected layer (e.g., by letting StartJobForm compute it based on selectedLayer or accepting a callback to derive it).
If StartJobForm already exposes the selectedLayer in a change handler, I can propose a small refactor to wire this up.
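As a rough sketch of that wiring; the helper names, signature, and the polarity of the check are assumptions about the surrounding code, not the actual implementation:

```tsx
// Derive the flag from the layer currently selected in the form
// rather than from the preselected (fixed) layer.
function deriveIsBoundingBoxConfigurable(
  selectedLayerName: string | null,
  layerIncludesEditableMapping: (layerName: string) => boolean,
): boolean {
  if (selectedLayerName == null) {
    return false;
  }
  return layerIncludesEditableMapping(selectedLayerName);
}
```

The result (or the callback itself) could then be passed down whenever StartJobForm reports a layer change.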
📒 Files selected for processing (17)
- app/controllers/AiModelController.scala (5 hunks)
- frontend/javascripts/admin/job/job_list_view.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection_form_item.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/collapsible_split_merger_evaluation_settings.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/collapsible_workflow_yaml_editor.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/mag_slider.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/should_use_trees_form_item.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/align_sections_form.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/custom_ai_model_inference_form.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/mitochondria_segmentation_form.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/neuron_segmentation_form.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/nuclei_detection_form.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/start_job_form.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/hooks/use_currently_selected_bounding_box.ts (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/materialize_volume_annotation_modal.tsx (1 hunks)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/tabs/train_ai_model_tab.tsx (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (13)
- frontend/javascripts/admin/job/job_list_view.tsx
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/tabs/train_ai_model_tab.tsx
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/mag_slider.tsx
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/start_job_form.tsx
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/align_sections_form.tsx
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/custom_ai_model_inference_form.tsx
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/collapsible_split_merger_evaluation_settings.tsx
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/nuclei_detection_form.tsx
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/should_use_trees_form_item.tsx
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/mitochondria_segmentation_form.tsx
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/neuron_segmentation_form.tsx
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/collapsible_workflow_yaml_editor.tsx
- app/controllers/AiModelController.scala
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2024-11-22T17:18:04.217Z
Learnt from: dieknolle3333
PR: scalableminds/webknossos#8168
File: frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039-1039
Timestamp: 2024-11-22T17:18:04.217Z
Learning: In `frontend/javascripts/oxalis/model/sagas/proofread_saga.ts`, when calling `getMagInfo`, the use of `volumeTracingLayer.resolutions` is intentional and should not be changed to `volumeTracingLayer.mags`.
Applied to files:
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection_form_item.tsx
🧬 Code Graph Analysis (4)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/materialize_volume_annotation_modal.tsx (9)
- frontend/javascripts/types/api_types.ts (1): APIDataLayer (114-114)
- frontend/javascripts/libs/react_hooks.ts (1): useWkSelector (292-294)
- frontend/javascripts/viewer/model/accessors/volumetracing_accessor.ts (1): getReadableNameOfVolumeLayer (942-949)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/constants.ts (1): jobNameToImagePath (12-20)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/start_job_form.tsx (2): JobApiCallArgsType (33-39), StartJobForm (56-262)
- frontend/javascripts/viewer/view/right-border-tabs/segments_tab/segments_view_helper.tsx (1): getBaseSegmentationName (43-48)
- frontend/javascripts/libs/utils.ts (1): computeArrayFromBoundingBox (299-308)
- frontend/javascripts/admin/api/jobs.ts (1): startMaterializingVolumeAnnotationJob (289-312)
- frontend/javascripts/viewer/view/action-bar/starting_job_modals.tsx (1): MaterializeVolumeAnnotationModal (1345-1463)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection_form_item.tsx (7)
- frontend/javascripts/viewer/store.ts (1): UserBoundingBox (107-109)
- frontend/javascripts/libs/react_hooks.ts (1): useWkSelector (292-294)
- frontend/javascripts/test/fixtures/hybridtracing_object.ts (1): colorLayer (12-31)
- frontend/javascripts/viewer/model.ts (1): getColorLayers (86-91)
- app/models/analytics/AnalyticsService.scala (1): isSuperUser (84-89)
- frontend/javascripts/viewer/view/action-bar/download_modal_view.tsx (1): isBoundingBoxExportable (128-164)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection.tsx (1): BoundingBoxSelection (7-45)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/hooks/use_currently_selected_bounding_box.ts (2)
- frontend/javascripts/viewer/store.ts (1): UserBoundingBox (107-109)
- frontend/javascripts/types/api_types.ts (1): APIDataLayer (114-114)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection.tsx (2)
- frontend/javascripts/viewer/store.ts (1): UserBoundingBox (107-109)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx (1): renderUserBoundingBox (29-54)
🪛 GitHub Check: frontend-tests
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection.tsx
[failure] 37-37: Type 'DefaultOptionType[]' is not assignable to type 'DefaultOptionType[][]'.
[failure] 22-22: Property 'label' does not exist on type 'DefaultOptionType[]'.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: backend-tests
- GitHub Check: build-smoketest-push
🔇 Additional comments (2)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/materialize_volume_annotation_modal.tsx (2)
120-147: Nice modularization and API wiring
Component structure is clear, the Modal + StartJobForm integration is idiomatic, and API parameters align with startMaterializingVolumeAnnotationJob. After addressing the above nits and merge semantics, this looks solid.
70-82: Asset present — no action required
Found the illustrative image in the repo; the modal's image reference should not be broken.
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/constants.ts — jobNameToImagePath.materialize_volume_annotation = "materialize_volume_annotation_example.jpg"
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/materialize_volume_annotation_modal.tsx — uses src={`/assets/images/${jobNameToImagePath[jobName]}`}
- public/images/materialize_volume_annotation_example.jpg — actual file present
  const filterOption = useCallback(
    (input: string, option?: SelectProps["options"]) =>
      String(option?.label ?? "")
        .toLowerCase()
        .indexOf(input.toLowerCase()) >= 0,
    [],
  );
Fix type errors and broken search; filter against source data by ID
The `filterOption` param type is wrong (using `SelectProps["options"]` yields an array type), causing the pipeline errors. Also, filtering on `option.label` fails when label is a ReactNode. Use the option's value (ID) to look up the bounding box name.
Apply this diff:
@@
-import { Select, type SelectProps } from "antd";
+import { Select, type SelectProps } from "antd";
+import type { DefaultOptionType } from "antd/es/select";
@@
- const filterOption = useCallback(
- (input: string, option?: SelectProps["options"]) =>
- String(option?.label ?? "")
- .toLowerCase()
- .indexOf(input.toLowerCase()) >= 0,
- [],
- );
+ const filterOption = useCallback(
+ (input: string, option?: DefaultOptionType) => {
+ const id = option?.value as number | undefined;
+ if (id == null) return false;
+ const bb = userBoundingBoxes.find((b) => b.id === id);
+ return (bb?.name || "").toLowerCase().includes(input.toLowerCase());
+ },
+ [userBoundingBoxes],
+ );
@@
return (
<Select
placeholder="Select a bounding box"
- optionFilterProp="children"
+ showSearch
options={options}
filterOption={filterOption}
disabled={userBoundingBoxes.length < 1}
onSelect={setSelectedBoundingBoxId}
style={style}
value={value}
/>
);
Also applies to: 33-43, 1-1
🧰 Tools
🪛 GitHub Check: frontend-tests
[failure] 22-22: Property 'label' does not exist on type 'DefaultOptionType[]'.
🤖 Prompt for AI Agents
In
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection.tsx
around lines 20-26 (and similarly update 33-43 and the top-of-file type
imports), the filterOption callback is typed incorrectly (SelectProps["options"]
is an array) and currently compares input against option.label which may be a
ReactNode; change the option param type to a single option
(SelectProps["options"][number] or the component's OptionType | undefined) and
instead use option?.value (the bounding-box ID) to look up the corresponding
source bounding box name, then compare input to that name (string) lowercased;
ensure you handle undefined safely and fall back to an empty string so the
search works and TypeScript types are satisfied.
@@ -0,0 +1,147 @@
import { startMaterializingVolumeAnnotationJob } from "admin/rest_api";
💡 Verification agent
🧩 Analysis chain
Verify import path for startMaterializingVolumeAnnotationJob
Confirm that "admin/rest_api" re-exports startMaterializingVolumeAnnotationJob; otherwise import from the canonical module to avoid runtime import errors.
Run this to confirm the actual export and re-export:
If it isn’t re-exported, switch to the canonical source:
-import { startMaterializingVolumeAnnotationJob } from "admin/rest_api";
+import { startMaterializingVolumeAnnotationJob } from "admin/api/jobs";
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Looking for the function export..."
rg -n -C2 -P 'export (?:function|const|{[^}]*})\s*startMaterializingVolumeAnnotationJob' -- frontend | sed -n '1,120p'
echo -e "\nLooking for re-exports from admin/rest_api..."
rg -n -C2 -P 'export\s*\*\s*from\s*["'\'']admin\/api\/jobs["'\'']|startMaterializingVolumeAnnotationJob' -- frontend/javascripts/admin 2>/dev/null || true
Length of output: 1107
Import from canonical module: admin/api/jobs
The function is exported from frontend/javascripts/admin/api/jobs.ts and there is no re-export from admin/rest_api — update the import to avoid runtime failures.
Files to update:
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/materialize_volume_annotation_modal.tsx
- Canonical definition: frontend/javascripts/admin/api/jobs.ts (export function startMaterializingVolumeAnnotationJob)
Suggested diff:
-import { startMaterializingVolumeAnnotationJob } from "admin/rest_api";
+import { startMaterializingVolumeAnnotationJob } from "admin/api/jobs";
📝 Committable suggestion
import { startMaterializingVolumeAnnotationJob } from "admin/rest_api"; | |
import { startMaterializingVolumeAnnotationJob } from "admin/api/jobs"; |
🤖 Prompt for AI Agents
In
frontend/javascripts/viewer/view/action-bar/ai_job_modals/materialize_volume_annotation_modal.tsx
around line 1, the import currently references admin/rest_api but the canonical
export of startMaterializingVolumeAnnotationJob is in
frontend/javascripts/admin/api/jobs.ts; update the import to import
startMaterializingVolumeAnnotationJob from "admin/api/jobs" (or the relative
path to frontend/javascripts/admin/api/jobs.ts according to project aliasing) so
the module resolves at runtime and avoids failures.
      const volumeLayerName =
        "fallbackLayer" in segmentationLayer && segmentationLayer.fallbackLayer != null
          ? getReadableNameOfVolumeLayer(segmentationLayer, tracing)
          : null;
      const baseSegmentationName = getBaseSegmentationName(segmentationLayer);
      const bbox = selectedBoundingBox?.boundingBox
        ? computeArrayFromBoundingBox(selectedBoundingBox.boundingBox)
        : undefined;
      return startMaterializingVolumeAnnotationJob(
        dataset.id,
        baseSegmentationName,
        volumeLayerName,
        newDatasetName,
        tracing.annotationId,
        tracing.annotationType,
        isMergerModeEnabled,
        includesEditableMapping,
        bbox,
      );
    },
    [dataset.id, tracing, isMergerModeEnabled, includesEditableMapping],
  );
Ensure merge semantics: non-null volumeLayerName when fallback exists and compute includesEditableMapping from the actually selected layer
Two issues:
- If getReadableNameOfVolumeLayer returns null, you pass null volumeLayerName despite having a fallback, which will skip merging.
- includesEditableMapping is taken from fixedSelectedLayer, not the user-chosen selectedLayer, leading to wrong backend behavior if the user changes the layer in the form.
Fix both within jobApiCall:
- const volumeLayerName =
- "fallbackLayer" in segmentationLayer && segmentationLayer.fallbackLayer != null
- ? getReadableNameOfVolumeLayer(segmentationLayer, tracing)
- : null;
+ const hasFallback =
+ "fallbackLayer" in segmentationLayer && segmentationLayer.fallbackLayer != null;
+ const volumeLayerName = hasFallback
+ ? getReadableNameOfVolumeLayer(segmentationLayer, tracing) ?? segmentationLayer.name
+ : null;
const baseSegmentationName = getBaseSegmentationName(segmentationLayer);
const bbox = selectedBoundingBox?.boundingBox
? computeArrayFromBoundingBox(selectedBoundingBox.boundingBox)
: undefined;
+ const includesEditableMappingLocal =
+ "tracingId" in segmentationLayer
+ ? tracing.volumes.find(
+ (volume) => volume.tracingId === segmentationLayer.tracingId,
+ )?.hasEditableMapping === true
+ : false;
return startMaterializingVolumeAnnotationJob(
dataset.id,
baseSegmentationName,
volumeLayerName,
newDatasetName,
tracing.annotationId,
tracing.annotationType,
isMergerModeEnabled,
- includesEditableMapping,
+ includesEditableMappingLocal,
bbox,
);
- },
- [dataset.id, tracing, isMergerModeEnabled, includesEditableMapping],
+ },
+ [dataset.id, tracing, isMergerModeEnabled],
📝 Committable suggestion
-      const volumeLayerName =
-        "fallbackLayer" in segmentationLayer && segmentationLayer.fallbackLayer != null
-          ? getReadableNameOfVolumeLayer(segmentationLayer, tracing)
-          : null;
-      const baseSegmentationName = getBaseSegmentationName(segmentationLayer);
-      const bbox = selectedBoundingBox?.boundingBox
-        ? computeArrayFromBoundingBox(selectedBoundingBox.boundingBox)
-        : undefined;
-      return startMaterializingVolumeAnnotationJob(
-        dataset.id,
-        baseSegmentationName,
-        volumeLayerName,
-        newDatasetName,
-        tracing.annotationId,
-        tracing.annotationType,
-        isMergerModeEnabled,
-        includesEditableMapping,
-        bbox,
-      );
-    },
-    [dataset.id, tracing, isMergerModeEnabled, includesEditableMapping],
-  );
+      const hasFallback =
+        "fallbackLayer" in segmentationLayer && segmentationLayer.fallbackLayer != null;
+      const volumeLayerName = hasFallback
+        ? getReadableNameOfVolumeLayer(segmentationLayer, tracing) ?? segmentationLayer.name
+        : null;
+      const baseSegmentationName = getBaseSegmentationName(segmentationLayer);
+      const bbox = selectedBoundingBox?.boundingBox
+        ? computeArrayFromBoundingBox(selectedBoundingBox.boundingBox)
+        : undefined;
+      const includesEditableMappingLocal =
+        "tracingId" in segmentationLayer
+          ? tracing.volumes.find(
+              (volume) => volume.tracingId === segmentationLayer.tracingId,
+            )?.hasEditableMapping === true
+          : false;
+      return startMaterializingVolumeAnnotationJob(
+        dataset.id,
+        baseSegmentationName,
+        volumeLayerName,
+        newDatasetName,
+        tracing.annotationId,
+        tracing.annotationType,
+        isMergerModeEnabled,
+        includesEditableMappingLocal,
+        bbox,
+      );
+    },
+    [dataset.id, tracing, isMergerModeEnabled],
+  );
🤖 Prompt for AI Agents
In
frontend/javascripts/viewer/view/action-bar/ai_job_modals/materialize_volume_annotation_modal.tsx
around lines 97–118, ensure volumeLayerName is never null when a fallbackLayer
exists and compute includesEditableMapping from the actually selectedLayer used
in the form: inside jobApiCall, replace the current volumeLayerName logic with
one that, when "fallbackLayer" in segmentationLayer and
segmentationLayer.fallbackLayer != null, uses
getReadableNameOfVolumeLayer(segmentationLayer, tracing) if it returns non-null
else falls back to a stable identifier from segmentationLayer.fallbackLayer
(e.g. fallbackLayer.name or id); also compute includesEditableMapping from the
selectedLayer variable (the user-chosen layer) rather than fixedSelectedLayer,
and update the hook dependency array to include selectedLayer (and
segmentationLayer if needed) so the memoized callback updates correctly.
Actionable comments posted: 2
♻️ Duplicate comments (4)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection.tsx (1)
51-58
: Fix broken search: use correct option type and filter against source data by ID
filterOption
is typed against an array element and inspects a ReactNode's.key
with a ts-ignore. This is brittle and breaks search. UseDefaultOptionType
and resolve the bounding-box name via the option's value (ID). Also,optionFilterProp="children"
is ineffective withoptions
API.Apply this diff:
@@ -import { Select, type SelectProps } from "antd"; +import { Select, type SelectProps } from "antd"; +import type { DefaultOptionType } from "antd/es/select"; @@ - const filterOption = useCallback( - (input: string, option?: ArrayElement<SelectProps["options"]>) => - // @ts-ignore: option.label is a React component / React.Node - String(option?.label?.key ?? "") - .toLowerCase() - .indexOf(input.toLowerCase()) >= 0, - [], - ); + const filterOption = useCallback( + (input: string, option?: DefaultOptionType) => { + const id = option?.value as number | undefined; + if (id == null) return false; + const bb = userBoundingBoxes.find((b) => b.id === id); + return (bb?.name || "").toLowerCase().includes(input.toLowerCase()); + }, + [userBoundingBoxes], + ); @@ return ( <Select showSearch placeholder="Select a bounding box" - optionFilterProp="children" options={options} filterOption={filterOption} disabled={userBoundingBoxes.length < 1} - onSelect={setSelectedBoundingBoxId} + onSelect={(id) => setSelectedBoundingBoxId?.(id as number)} style={style} - value={value} + value={value ?? undefined} /> );Also applies to: 65-73
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/annotations_csv_input.tsx (1)
93-101
: Fix Math.max on empty array: avoid -Infinity and NaN IDsWhen
userBoundingBoxes
is empty,Math.max(...[])
returns-Infinity
, leading toNaN
for the next ID. Guard the empty case.Apply this diff:
- if (annotation.task?.boundingBox) { - const largestId = Math.max(...userBoundingBoxes.map(({ id }) => id)); + if (annotation.task?.boundingBox) { + const largestId = + userBoundingBoxes.length > 0 + ? Math.max(...userBoundingBoxes.map(({ id }) => id)) + : 0; userBoundingBoxes.push({ name: "Task Bounding Box", boundingBox: Utils.computeBoundingBoxFromBoundingBoxObject(annotation.task.boundingBox), color: [0, 0, 0], isVisible: true, id: largestId + 1, }); }frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx (2)
49-76
: Improve mag selection: consider all axes, not just XThe current comparison only checks
diff[0]
, which can pick a mag with poor Y/Z match. Minimize the worst-axis deviation instead.Apply this diff:
const modelScale = MEAN_VX_SIZE[jobType]; let closestMagOfCurrentDS = colorLayer.resolutions[0]; - let bestDifference = [ - Number.POSITIVE_INFINITY, - Number.POSITIVE_INFINITY, - Number.POSITIVE_INFINITY, - ]; + let bestScore = Number.POSITIVE_INFINITY; @@ for (const mag of colorLayer.resolutions) { const diff = datasetScaleInNm.map((dim, i) => Math.abs(Math.log(dim * mag[i]) - Math.log(modelScale[i])), ); - if (bestDifference[0] > diff[0]) { - bestDifference = diff; - closestMagOfCurrentDS = mag; - } + const score = Math.max(...diff); // worst-axis mismatch + if (score < bestScore) { + bestScore = score; + closestMagOfCurrentDS = mag; + } } - const maxDistance = Math.max(...bestDifference); + const maxDistance = bestScore;
87-90
: Use ceil when computing minimum required extents
Math.round
can undershoot the minimum.Math.ceil
enforces the true minimum size.Apply this diff:
- const minExtentInMag1 = minBBoxExtentInModelMag.map((extent, i) => - Math.round(extent * mag[i]), - ) as Vector3; + const minExtentInMag1 = minBBoxExtentInModelMag.map((extent, i) => + Math.ceil(extent * mag[i]), + ) as Vector3;
🧹 Nitpick comments (8)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection.tsx (1)
60-63
: Avoid using React element keys as dataRelying on
label.key
for search is an implementation detail of React and unstable. The suggested change above removes that dependency and uses the option value (ID) to resolve the name.frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/annotations_csv_input.tsx (2)
65-71
: Prefer enum over string literal for layer typeUse
AnnotationLayerEnum.Volume
for consistency and type-safety.Apply this diff:
- annotation.annotationLayers - .filter((layer) => layer.typ === "Volume") + annotation.annotationLayers + .filter((layer) => layer.typ === AnnotationLayerEnum.Volume)
59-114
: Consider per-line error handling to improve UXA single failing fetch inside
Promise.all
rejects the entire operation. If feasible, catch per-annotation errors and surface them (e.g., via Toast), while continuing with valid rows.If you want, I can draft a resilient loader that aggregates successes and reports failures per line.
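If wanted, a rough sketch of that strategy with Promise.allSettled (the function and parameter names below are placeholders, not the component's existing helpers; Toast is assumed to expose warning as used elsewhere in the codebase):

import Toast from "libs/toast";

// Hypothetical loader: keep the annotations that loaded successfully and
// surface the failed IDs once, instead of letting a single rejection fail the
// whole batch.
async function loadAnnotationInfosResilient<T>(
  annotationIds: string[],
  loadOne: (annotationId: string) => Promise<T>,
): Promise<T[]> {
  const results = await Promise.allSettled(annotationIds.map(loadOne));
  const failedIds = annotationIds.filter((_, index) => results[index].status === "rejected");
  if (failedIds.length > 0) {
    Toast.warning(`Could not load ${failedIds.length} annotation(s): ${failedIds.join(", ")}`);
  }
  return results
    .filter((result): result is PromiseFulfilledResult<T> => result.status === "fulfilled")
    .map((result) => result.value);
}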
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx (3)
60-70
: Fix typo in tooltip: “neuclei” → “nuclei”Small spelling correction in user-facing copy.
Apply this diff:
- "The model category determines the type of object that is segmented. Neuron models are suitable for segmenting neurons in EM tissue. The other model category is suitable for segmenting any non-neuron object, e.g. neuclei, vesicles, etc. The workflows are optimized for EM data, e.g. from FIB-SEM, MSEM, Serial-Section SEM etc" + "The model category determines the type of object that is segmented. Neuron models are suitable for segmenting neurons in EM tissue. The other model category is suitable for segmenting any non-neuron object, e.g. nuclei, vesicles, etc. The workflows are optimized for EM data, e.g. from FIB-SEM, MSEM, Serial-Section SEM etc"
84-104
: Prop typing: allow undefined during first render
Form.useWatch("modelCategory", form)
can be undefined on initial render. Widen the prop type to avoid TS friction and runtime surprises.Apply this diff:
-const AiInferenceOptionsFormItems = ({ - selectedModelCategory, -}: { selectedModelCategory: APIAiModelCategory }) => { +const AiInferenceOptionsFormItems = ({ + selectedModelCategory, +}: { selectedModelCategory?: APIAiModelCategory }) => { // TODO: It would be great to have several presets to choose from. The Antd <AutoComplete> component did not work well for this with antd v5.22 or 5.27 - return selectedModelCategory === APIAiModelCategory.EM_NUCLEI ? ( + return selectedModelCategory === APIAiModelCategory.EM_NUCLEI ? ( <Col span={6}>
208-226
: Typo: commonJobArgmuments → commonJobArgumentsMinor naming fix to improve readability and avoid future confusion.
Apply this diff:
- const commonJobArgmuments = { + const commonJobArguments = { trainingAnnotations: getTrainingAnnotations(values), name: values.modelName, workflowYaml: useCustomWorkflow ? values.workflowYaml : undefined, comment: values.comment, }; @@ - ...commonJobArgmuments, + ...commonJobArguments, }); } else { await runNeuronTraining({ aiModelCategory: APIAiModelCategory.EM_NEURONS, - ...commonJobArgmuments, + ...commonJobArguments, });frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx (2)
24-33
: Nit: avoid id 0 for synthetic boxes
id: -1 * index
yields0
for the first item. If other code treats0
as falsy/no-selection, this can be confusing. Consider starting at-1 - index
.Apply this diff:
- id: -1 * index, + id: -1 - index,
14-22
: Consider making return type explicit for getMinimumDSSizeIf new job types are added, falling through could return
undefined
. Adding an explicit return type and exhaustive handling (or a default) can prevent subtle bugs.I can provide an exhaustive guard with
never
if you prefer.
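If helpful, a sketch of such a guard (the union members and the placeholder sizes are illustrative assumptions; the real values live in ModalJobTypes and the constants in ai_job_modals/constants.ts, and the Vector3 import path may differ):

import type { Vector3 } from "viewer/constants";

// Illustrative stand-in for the ModalJobTypes union from constants.ts.
type ModalJobType = "infer_neurons" | "infer_mitochondria" | "align_sections";

function getMinimumDSSize(jobType: ModalJobType): Vector3 {
  switch (jobType) {
    case "infer_neurons":
      return [64, 64, 64]; // placeholder minimum, not the real value
    case "infer_mitochondria":
      return [64, 64, 64]; // placeholder minimum, not the real value
    case "align_sections":
      return [64, 64, 16]; // placeholder minimum, not the real value
    default: {
      // Exhaustiveness guard: a newly added job type becomes a compile-time
      // error here instead of silently returning undefined.
      const unhandled: never = jobType;
      throw new Error(`Unhandled job type: ${unhandled}`);
    }
  }
}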
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/annotations_csv_input.tsx
(1 hunks)frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection.tsx
(1 hunks)frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/neuron_segmentation_form.tsx
(1 hunks)frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/nuclei_detection_form.tsx
(1 hunks)frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx
(1 hunks)frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/neuron_segmentation_form.tsx
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/nuclei_detection_form.tsx
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2024-11-22T17:18:04.217Z
Learnt from: dieknolle3333
PR: scalableminds/webknossos#8168
File: frontend/javascripts/oxalis/model/sagas/proofread_saga.ts:1039-1039
Timestamp: 2024-11-22T17:18:04.217Z
Learning: In `frontend/javascripts/oxalis/model/sagas/proofread_saga.ts`, when calling `getMagInfo`, the use of `volumeTracingLayer.resolutions` is intentional and should not be changed to `volumeTracingLayer.mags`.
Applied to files:
frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx
🧬 Code Graph Analysis (4)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/annotations_csv_input.tsx (7)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx (1)
AnnotationInfoForAITrainingJob
(123-129)frontend/javascripts/types/api_types.ts (2)
APIAnnotation
(588-590)ServerVolumeTracing
(972-999)frontend/javascripts/admin/api/tasks.ts (1)
getAnnotationsForTask
(29-34)frontend/javascripts/admin/rest_api.ts (2)
getUnversionedAnnotationInformation
(610-621)getTracingForAnnotationType
(732-773)frontend/javascripts/viewer/model/reducers/volumetracing_reducer.ts (1)
serverVolumeToClientVolumeTracing
(268-324)frontend/javascripts/viewer/model/reducers/reducer_helpers.ts (1)
convertUserBoundingBoxesFromServerToFrontend
(68-85)frontend/javascripts/viewer/constants.ts (1)
Vector3
(14-14)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx (11)
frontend/javascripts/viewer/constants.ts (1)
Vector3
(14-14)frontend/javascripts/types/api_types.ts (3)
APIDataLayer
(114-114)APIAnnotation
(588-590)APIDataset
(242-245)frontend/javascripts/viewer/store.ts (1)
StoreAnnotation
(219-224)frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx (3)
AnnotationInfoForAITrainingJob
(123-129)checkAnnotationsForErrorsAndWarnings
(131-160)checkBoundingBoxesForErrorsAndWarnings
(163-287)frontend/javascripts/viewer/model.ts (2)
getColorLayers
(86-91)getSegmentationLayers
(93-98)frontend/javascripts/admin/api/jobs.ts (2)
runInstanceModelTraining
(391-396)runNeuronTraining
(375-380)app/controllers/AiModelController.scala (1)
runNeuronTraining
(142-184)frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/annotations_csv_input.tsx (1)
AnnotationsCsvInput
(20-160)frontend/javascripts/components/layer_selection.tsx (2)
LayerSelection
(14-60)LayerSelectionFormItem
(62-94)frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/collapsible_workflow_yaml_editor.tsx (1)
CollapsibleWorkflowYamlEditor
(7-40)frontend/javascripts/libs/format_utils.ts (1)
formatVoxels
(515-538)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection.tsx (5)
frontend/javascripts/viewer/store.ts (1)
UserBoundingBox
(107-109)frontend/javascripts/viewer/constants.ts (1)
Vector3
(14-14)frontend/javascripts/libs/utils.ts (2)
rgbToHex
(215-217)computeArrayFromBoundingBox
(299-308)frontend/javascripts/libs/format_utils.ts (1)
formatVoxels
(515-538)frontend/javascripts/types/globals.d.ts (1)
ArrayElement
(35-35)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx (7)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/constants.ts (3)
ModalJobTypes
(4-7)MIN_BBOX_EXTENT
(25-29)MEAN_VX_SIZE
(31-35)frontend/javascripts/types/api_types.ts (3)
APIDataLayer
(114-114)APIDataset
(242-245)APIAnnotation
(588-590)frontend/javascripts/viewer/store.ts (3)
UserBoundingBox
(107-109)VolumeTracing
(191-212)StoreAnnotation
(219-224)frontend/javascripts/libs/utils.ts (2)
computeBoundingBoxFromBoundingBoxObject
(281-285)computeArrayFromBoundingBox
(299-308)frontend/javascripts/viewer/model/scaleinfo.ts (1)
convertVoxelSizeToUnit
(53-58)frontend/javascripts/viewer/constants.ts (2)
Vector3
(14-14)Vector6
(17-17)frontend/javascripts/libs/toast.tsx (1)
warning
(197-200)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build-smoketest-push
- GitHub Check: frontend-tests
- GitHub Check: backend-tests
🔇 Additional comments (4)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/annotations_csv_input.tsx (2)
33-38
: Good: safe extraction of last URL segmentSwitching from
as string
to a validatedlastSegment
avoids unsafe assertions. This prevents pushingundefined
IDs.
44-49
: Good: check finished annotations instead of all annotationsUsing
finishedAnnotations.length
aligns the condition with what you push, removing ambiguity.frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx (2)
88-102
: Label text LGTM“Max Length of Objects” fixes the earlier typo. Nicely done.
145-164
: Good: getMagInfo usage aligns with established practiceUsing
getMagInfo(colorLayer.resolutions)
is consistent with past learnings and avoids mismatches with.mags
.
    const upscaledColor = bbox.color.map((colorPart) => colorPart * 255) as any as Vector3;
    const colorAsHexString = rgbToHex(upscaledColor);
    const volumeInVx = new BoundingBox(bbox.boundingBox).getVolume();
🛠️ Refactor suggestion
Defensively normalize bbox color; avoid double-scaling to 0–255
`bbox.color` is scaled by 255 unconditionally, which breaks when it already contains 0–255 values (produces 65025+). Normalize/clamp before converting to hex.
Apply this diff:
- const upscaledColor = bbox.color.map((colorPart) => colorPart * 255) as any as Vector3;
- const colorAsHexString = rgbToHex(upscaledColor);
+ const clamp255 = (v: number) => Math.max(0, Math.min(255, Math.round(v)));
+ const isNormalized = bbox.color.every((c) => c <= 1);
+ const color0to255 = (isNormalized ? bbox.color.map((c) => c * 255) : bbox.color).map(
+ clamp255,
+ ) as Vector3;
+ const colorAsHexString = rgbToHex(color0to255);
📝 Committable suggestion
- const upscaledColor = bbox.color.map((colorPart) => colorPart * 255) as any as Vector3;
- const colorAsHexString = rgbToHex(upscaledColor);
- const volumeInVx = new BoundingBox(bbox.boundingBox).getVolume();
+ const clamp255 = (v: number) => Math.max(0, Math.min(255, Math.round(v)));
+ const isNormalized = bbox.color.every((c) => c <= 1);
+ const color0to255 = (isNormalized ? bbox.color.map((c) => c * 255) : bbox.color).map(
+   clamp255,
+ ) as Vector3;
+ const colorAsHexString = rgbToHex(color0to255);
+ const volumeInVx = new BoundingBox(bbox.boundingBox).getVolume();
🤖 Prompt for AI Agents
In
frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/bounding_box_selection.tsx
around lines 19–21, bbox.color is being multiplied by 255 unconditionally which
double-scales colors already in 0–255; instead detect and normalize the range
then clamp: check whether any component is >1 (assume 0–255) or <=1 (0–1), if in
0–1 multiply components by 255, then clamp each component to 0–255 and round to
integers before passing to rgbToHex; ensure the final value is typed as Vector3
(or numeric tuple) and replace the current unconditional map with this
normalization logic.
@@ -0,0 +1,429 @@
import { APIAiModelCategory, runInstanceModelTraining, runNeuronTraining } from "admin/rest_api";
💡 Verification agent
🧩 Analysis chain
Verify import source for training API functions
runNeuronTraining
and runInstanceModelTraining
are commonly exported from admin/api/jobs
, not admin/rest_api
. If not re-exported, this import will break.
Run this script to confirm export locations and references:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Search for function definitions/exports
rg -nP -C2 'export function (runNeuronTraining|runInstanceModelTraining)\b'
# Search for re-exports in admin/rest_api
rg -nP -C3 'export .*runNeuronTraining|runInstanceModelTraining' frontend/javascripts/admin/rest_api.ts
Length of output: 1009
Import training functions from admin/api/jobs (will break if unchanged)
rg shows both functions are exported from frontend/javascripts/admin/api/jobs.ts and frontend/javascripts/admin/rest_api.ts does not re-export them — the current import will fail.
- Change needed in: frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx (line 1)
- Actual exports found in: frontend/javascripts/admin/api/jobs.ts (export function runNeuronTraining — ~line 375; runInstanceModelTraining — ~line 391)
- frontend/javascripts/admin/rest_api.ts: no re-export found for these functions
Replace the import, e.g.:
- import { APIAiModelCategory, runInstanceModelTraining, runNeuronTraining } from "admin/rest_api";
- import { APIAiModelCategory } from "admin/rest_api";
- import { runInstanceModelTraining, runNeuronTraining } from "admin/api/jobs";
🤖 Prompt for AI Agents
In
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx
around line 1, the file imports runInstanceModelTraining and runNeuronTraining
from "admin/rest_api" but those functions are exported from
"frontend/javascripts/admin/api/jobs.ts" instead; update the imports so
APIAiModelCategory stays imported from "admin/rest_api" and import
runInstanceModelTraining and runNeuronTraining from "admin/api/jobs" to match
actual exports and avoid runtime import errors.
Thanks for doing all this refactoring and taking on the issue which was originally assigned to me 🙏
I did not take a very close look at the file where you moved components only. I only spotted typos and a missing occurrence of the new job type.
Moreover, I noticed https://github.com/scalableminds/voxelytics/issues/4261 which is likely unrelated
  ) {
    const numberOfTrainingAnnotations = job.trainingAnnotations?.length || 0;
    const modelName =
      job.type === APIJobType.TRAIN_NEURON_MODEL ? "neuron model" : "instance model";
not sure, but wasn't the `DEPRECATED_TRAIN_MODEL` also a "neuron model" and thus the logic needs to be changed here?
You are likely right. Will fix.
const renderActionsForModel = (model: AiModel, onChangeSharedOrganizations: () => void) => {
  const organizationSharingButton = model.isOwnedByUsersOrganization ? (
    <Button type="link" onClick={onChangeSharedOrganizations} icon={<TeamOutlined />}>
    <a onClick={onChangeSharedOrganizations}>
Why did you change this from button type link to an actual link? Shouldn't it have looked the same?
INFER_NEURONS = "infer_neurons", | ||
MATERIALIZE_VOLUME_ANNOTATION = "materialize_volume_annotation", | ||
TRAIN_NEURON_MODEL = "train_neuron_model", | ||
TRAIN_INSTANCE_MODEL = "train_instance_model", |
Shouldn't there also be types for `runInstanceModelInferenceWithAiModelJob` and so on? The backend side at least has `infer_instances` as part of JobCommand in JobCommand.scala. Maybe this also needs to be taken into account in ai_model_list_view.tsx
Yes, these types exist and were added in this PR. See jobs.ts.
type RunInstanceModelInferenceParameters = BaseModelInferenceParameters & { |
But this enum is still missing the newly introduced type `infer_instances`. I meant:
  TRAIN_INSTANCE_MODEL = "train_instance_model",
  INFER_MITOCHONDRIA = "infer_mitochondria",
  INFER_INSTANCES = "infer_instances",
And above I got the wrong file. I did not mean that one but instead job_list_view.tsx. There in renderDescription the new type infer_instances also needs a rendering entry, doesn't it?
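For illustration, such an entry could mirror the existing neuron branch in renderDescription; this is only a sketch of the idea, not the final wording or placement:

} else if (job.type === APIJobType.INFER_INSTANCES && linkToDataset != null && job.layerName) {
  return (
    <span>
      AI instance segmentation for layer <em>{job.layerName}</em> of{" "}
      <Link to={linkToDataset}>{job.datasetName}</Link>
    </span>
  );
}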
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/neuron_segmentation_form.tsx
...nd/javascripts/viewer/view/action-bar/ai_job_modals/forms/mitochondria_segmentation_form.tsx
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx
…s/neuron_segmentation_form.tsx Co-authored-by: MichaelBuessemeyer <[email protected]>
…s/mitochondria_segmentation_form.tsx Co-authored-by: MichaelBuessemeyer <[email protected]>
…s/train_ai_model_form.tsx Co-authored-by: MichaelBuessemeyer <[email protected]>
… into split_ai_modals
Actionable comments posted: 2
♻️ Duplicate comments (1)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx (1)
1-1
: Import training functions from the correct module (will break at runtime)
runNeuronTraining
andrunInstanceModelTraining
are exported from admin/api/jobs, not admin/rest_api. This import will fail.Apply this diff:
-import { APIAiModelCategory, runInstanceModelTraining, runNeuronTraining } from "admin/rest_api"; +import { APIAiModelCategory } from "admin/rest_api"; +import { runInstanceModelTraining, runNeuronTraining } from "admin/api/jobs";
🧹 Nitpick comments (2)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx (2)
208-214
: Rename typo: commonJobArgmuments → commonJobArgumentsMinor but confusing typo in a widely used variable name.
- const commonJobArgmuments = { + const commonJobArguments = { trainingAnnotations: getTrainingAnnotations(values), name: values.modelName, workflowYaml: useCustomWorkflow ? values.workflowYaml : undefined, comment: values.comment, }; @@ - maxDistanceNm: values.maxDistanceNm, - ...commonJobArgmuments, + maxDistanceNm: values.maxDistanceNm, + ...commonJobArguments, }); } else { await runNeuronTraining({ aiModelCategory: APIAiModelCategory.EM_NEURONS, - ...commonJobArgmuments, + ...commonJobArguments, });Also applies to: 219-226
84-104
: Rename component: AiInferenceOptionsFormItems → AiTrainingOptionsFormItemsThis renders a training parameter (
maxDistanceNm
), not inference options. Rename for clarity.-const AiInferenceOptionsFormItems = ({ +const AiTrainingOptionsFormItems = ({ selectedModelCategory, }: { selectedModelCategory: APIAiModelCategory }) => { @@ - <Form.Item + <Form.Item hasFeedback name={["maxDistanceNm"]} label={<div style={{ minHeight: 24 }}>Max Length of Objects</div>}- <AiInferenceOptionsFormItems selectedModelCategory={selectedModelCategory} /> + <AiTrainingOptionsFormItems selectedModelCategory={selectedModelCategory} />Also applies to: 368-369
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (5)
frontend/javascripts/admin/job/job_list_view.tsx
(2 hunks)frontend/javascripts/types/api_types.ts
(3 hunks)frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/mitochondria_segmentation_form.tsx
(1 hunks)frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/neuron_segmentation_form.tsx
(1 hunks)frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/mitochondria_segmentation_form.tsx
- frontend/javascripts/admin/job/job_list_view.tsx
- frontend/javascripts/types/api_types.ts
- frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/neuron_segmentation_form.tsx
🧰 Additional context used
🧬 Code Graph Analysis (1)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx (8)
frontend/javascripts/viewer/constants.ts (1)
Vector3
(14-14)frontend/javascripts/types/api_types.ts (3)
APIDataLayer
(114-114)APIAnnotation
(589-591)APIDataset
(243-246)frontend/javascripts/viewer/view/action-bar/ai_job_modals/utils.tsx (3)
AnnotationInfoForAITrainingJob
(123-129)checkAnnotationsForErrorsAndWarnings
(131-160)checkBoundingBoxesForErrorsAndWarnings
(163-287)frontend/javascripts/admin/api/jobs.ts (2)
runInstanceModelTraining
(391-396)runNeuronTraining
(375-380)frontend/javascripts/viewer/view/action-bar/ai_job_modals/components/annotations_csv_input.tsx (1)
AnnotationsCsvInput
(20-160)frontend/javascripts/components/layer_selection.tsx (2)
LayerSelection
(14-60)LayerSelectionFormItem
(62-94)frontend/javascripts/components/mag_selection.tsx (1)
MagSelectionFormItem
(7-28)frontend/javascripts/libs/format_utils.ts (1)
formatVoxels
(515-538)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build-smoketest-push
- GitHub Check: backend-tests
- GitHub Check: frontend-tests
🔇 Additional comments (2)
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx (2)
100-100
: Verify InputNumber “suffix” support in your AntD versionAntD historically supports
addonAfter
orformatter/parser
on InputNumber;suffix
may not be available depending on your exact version. If unsupported, the unit won’t render (and TS may complain).If needed, switch to one of the following:
- addonAfter:
<InputNumber min={0.1} addonAfter="nm" placeholder="Enter max distance in nm" />
- formatter/parser:
<InputNumber min={0.1} placeholder="Enter max distance in nm" formatter={(v) => (v == null ? "" : `${v} nm`)} parser={(v) => { const n = parseFloat(String(v).replace(/\s*nm\s*$/i, "")); return Number.isFinite(n) ? n : 0; }} />
93-93
: Typo fix acknowledged“Max Length of Objects” label looks corrected. Thanks for addressing the earlier typo.
  const watcherFunctionRef = useRef(() => {
    return [new MagInfo([])];
  });
  watcherFunctionRef.current = () => {
    const getIntersectingMags = (idx: number, annotationId: string, dataset: APIDataset) => {
      const segmentationLayerName = form.getFieldValue(["trainingAnnotations", idx, "layerName"]);
      const imageDataLayerName = form.getFieldValue(["trainingAnnotations", idx, "imageDataLayer"]);
      if (segmentationLayerName != null && imageDataLayerName != null) {
        return new MagInfo(
          getIntersectingMagList(annotationId, dataset, segmentationLayerName, imageDataLayerName),
        );
      }
      return new MagInfo([]);
    };

    return annotationInfos.map((annotationInfo, idx: number) => {
      const annotation = annotationInfo.annotation;
      const annotationId = "id" in annotation ? annotation.id : annotation.annotationId;
      return getIntersectingMags(idx, annotationId, annotationInfo.dataset);
    });
  };

  const magInfoForLayer: Array<MagInfo> = Form.useWatch(() => {
    return watcherFunctionRef.current();
  }, form);
  const trainingAnnotationsInfo = Form.useWatch(
    "trainingAnnotations",
    form,
  ) as TrainingAnnotation[];
|
Fix broken Form.useWatch usage; compute mag info from trainingAnnotations via useMemo
Form.useWatch
doesn’t take a function as the first argument. As written, magInfoForLayer
will never update correctly, leaving the mag selector empty and blocking form submission.
- Watch
trainingAnnotations
directly and derivemagInfoForLayer
withuseMemo
. - Import
useMemo
from React.
Apply these diffs:
-import { useCallback, useRef, useState } from "react";
+import { useCallback, useRef, useState, useMemo } from "react";
- const watcherFunctionRef = useRef(() => {
- return [new MagInfo([])];
- });
- watcherFunctionRef.current = () => {
- const getIntersectingMags = (idx: number, annotationId: string, dataset: APIDataset) => {
- const segmentationLayerName = form.getFieldValue(["trainingAnnotations", idx, "layerName"]);
- const imageDataLayerName = form.getFieldValue(["trainingAnnotations", idx, "imageDataLayer"]);
- if (segmentationLayerName != null && imageDataLayerName != null) {
- return new MagInfo(
- getIntersectingMagList(annotationId, dataset, segmentationLayerName, imageDataLayerName),
- );
- }
- return new MagInfo([]);
- };
-
- return annotationInfos.map((annotationInfo, idx: number) => {
- const annotation = annotationInfo.annotation;
- const annotationId = "id" in annotation ? annotation.id : annotation.annotationId;
- return getIntersectingMags(idx, annotationId, annotationInfo.dataset);
- });
- };
-
- const magInfoForLayer: Array<MagInfo> = Form.useWatch(() => {
- return watcherFunctionRef.current();
- }, form);
- const trainingAnnotationsInfo = Form.useWatch(
- "trainingAnnotations",
- form,
- ) as TrainingAnnotation[];
+ const trainingAnnotationsInfo =
+ Form.useWatch("trainingAnnotations", form) as TrainingAnnotation[] | undefined;
+
+ const magInfoForLayer: Array<MagInfo> = useMemo(() => {
+ return annotationInfos.map((annotationInfo, idx: number) => {
+ const annotation = annotationInfo.annotation;
+ const annotationId = "id" in annotation ? annotation.id : annotation.annotationId;
+ const segmentationLayerName = trainingAnnotationsInfo?.[idx]?.layerName;
+ const imageDataLayerName = trainingAnnotationsInfo?.[idx]?.imageDataLayer;
+ if (segmentationLayerName != null && imageDataLayerName != null) {
+ return new MagInfo(
+ getIntersectingMagList(
+ annotationId,
+ annotationInfo.dataset,
+ segmentationLayerName,
+ imageDataLayerName,
+ ) ?? [],
+ );
+ }
+ return new MagInfo([]);
+ });
+ }, [annotationInfos, trainingAnnotationsInfo, getIntersectingMagList]);
Also applies to: 9-9
🤖 Prompt for AI Agents
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx
lines 166-195: Form.useWatch is used incorrectly with a function and therefore
magInfoForLayer never updates; replace this pattern by watching
"trainingAnnotations" directly and deriving magInfoForLayer with useMemo.
Specifically, import useMemo from React, remove the watcherFunctionRef and its
assignment, call Form.useWatch("trainingAnnotations", form) to get the current
trainingAnnotations (you already do this below), then create magInfoForLayer =
useMemo(() => { compute per-annotation MagInfo by reading
form.getFieldValue(["trainingAnnotations", idx, "layerName"]) and
["trainingAnnotations", idx, "imageDataLayer"] and using
getIntersectingMagList(...) }, [trainingAnnotations, form]); ensure
magInfoForLayer is an Array<MagInfo> and that the useMemo dependencies include
trainingAnnotations (and form if needed), so the mag selector updates correctly.
      .filter(
        (tracingId) =>
          !annotation.annotationLayers.find(
            (annotationLayer) => annotationLayer.tracingId === tracingId,
          ),
      );
Fix comparison bug: comparing layer name to tracingId
You map dataset segmentation layers to their names but then compare them to `annotationLayer.tracingId`. This will never match, so layers present in the annotation aren't filtered out correctly.
Apply this diff:
- .filter(
- (tracingId) =>
- !annotation.annotationLayers.find(
- (annotationLayer) => annotationLayer.tracingId === tracingId,
- ),
- );
+ .filter(
+ (layerName) =>
+ !annotation.annotationLayers.find(
+ (annotationLayer) => annotationLayer.name === layerName,
+ ),
+ );
📝 Committable suggestion
-      .filter(
-        (tracingId) =>
-          !annotation.annotationLayers.find(
-            (annotationLayer) => annotationLayer.tracingId === tracingId,
-          ),
-      );
+      .filter(
+        (layerName) =>
+          !annotation.annotationLayers.find(
+            (annotationLayer) => annotationLayer.name === layerName,
+          ),
+      );
🤖 Prompt for AI Agents
In
frontend/javascripts/viewer/view/action-bar/ai_job_modals/forms/train_ai_model_form.tsx
around lines 284-289, the code maps dataset segmentation layers to their names
but then compares those names to annotationLayer.tracingId, which can never
match; change the mapping to produce the tracingId for each dataset segmentation
layer (or else compare annotationLayer.name to the mapped names) so the filter
actually excludes layers already present on the annotation; update the map(...)
to return layer.tracingId (or adjust the comparison target) and ensure types
still align.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/javascripts/admin/job/job_list_view.tsx (1)
390-396
: Missing actions for new instance types (no View link; no post-train hint)
Two functional omissions in the Action column:
- INFER_INSTANCES isn’t included in the “View Segmentation” block, so completed instance inference jobs won’t show a View link.
- TRAIN_INSTANCE_MODEL isn’t included in the training success hint, so users won’t see the guidance after instance-model training succeeds.
Both break UX for the newly added job types.
Apply this diff:
@@ - } else if ( - job.type === APIJobType.INFER_NUCLEI || - job.type === APIJobType.INFER_NEURONS || - job.type === APIJobType.MATERIALIZE_VOLUME_ANNOTATION || - job.type === APIJobType.COMPUTE_MESH_FILE || - job.type === APIJobType.DEPRECATED_INFER_WITH_MODEL || - job.type === APIJobType.INFER_MITOCHONDRIA - ) { + } else if ( + job.type === APIJobType.INFER_NUCLEI || + job.type === APIJobType.INFER_NEURONS || + job.type === APIJobType.INFER_INSTANCES || + job.type === APIJobType.MATERIALIZE_VOLUME_ANNOTATION || + job.type === APIJobType.COMPUTE_MESH_FILE || + job.type === APIJobType.DEPRECATED_INFER_WITH_MODEL || + job.type === APIJobType.INFER_MITOCHONDRIA + ) { @@ - } else if ( - job.type === APIJobType.TRAIN_NEURON_MODEL || - job.type === APIJobType.DEPRECATED_TRAIN_MODEL - ) { + } else if ( + job.type === APIJobType.TRAIN_NEURON_MODEL || + job.type === APIJobType.TRAIN_INSTANCE_MODEL || + job.type === APIJobType.DEPRECATED_TRAIN_MODEL + ) {Also applies to: 408-410
🧹 Nitpick comments (4)
frontend/javascripts/admin/job/job_list_view.tsx (4)
238-241
: Accessibility nit: prefer <em> over <i> for emphasis
Use semantic <em> (or CSS classes) rather than <i> for better accessibility and consistency.
Example diff across the affected lines:
- AI Neuron inference for layer <i>{job.layerName}</i> of{" "} + AI Neuron inference for layer <em>{job.layerName}</em> of{" "}- AI Mitochondria inference for layer <i>{job.layerName}</i> of{" "} + AI Mitochondria inference for layer <em>{job.layerName}</em> of{" "}- AI instance segmentation for layer <i>{job.layerName}</i> of{" "} + AI instance segmentation for layer <em>{job.layerName}</em> of{" "}- Align sections for layer <i>{job.layerName}</i> of{" "} + Align sections for layer <em>{job.layerName}</em> of{" "}Also applies to: 259-262, 263-269, 273-275
263-269
: Optional: reflect custom-model runs in the description for instancesFor parity with the neurons flow, consider including “with custom model” when job.modelId is present. This improves clarity in the job list.
- } else if (job.type === APIJobType.INFER_INSTANCES && linkToDataset != null && job.layerName) { - return ( - <span> - AI instance segmentation for layer <em>{job.layerName}</em> of{" "} - <Link to={linkToDataset}>{job.datasetName}</Link>{" "} - </span> - ); + } else if (job.type === APIJobType.INFER_INSTANCES && linkToDataset != null && job.layerName) { + const isCustom = job.modelId != null; + return ( + <span> + {isCustom ? ( + <> + Run AI instance segmentation with custom model on{" "} + <Link to={linkToDataset}>{job.datasetName}</Link> + </> + ) : ( + <> + AI instance segmentation for layer <em>{job.layerName}</em> of{" "} + <Link to={linkToDataset}>{job.datasetName}</Link>{" "} + </> + )} + </span> + ); }
287-297
: Optional copy tweak to reduce naming confusionTo align with the PR’s goal of clarifying instance segmentation vs. neuron models, consider using “instance segmentation model” instead of “instance model”.
- const modelName =
-   job.type === APIJobType.TRAIN_NEURON_MODEL || job.type === APIJobType.DEPRECATED_TRAIN_MODEL
-     ? "neuron model"
-     : "instance model";
+ const modelName =
+   job.type === APIJobType.TRAIN_NEURON_MODEL || job.type === APIJobType.DEPRECATED_TRAIN_MODEL
+     ? "neuron model"
+     : "instance segmentation model";
243-251: Follow-up: consider separate copy for custom instance models
You treat custom neuron inferences with distinct copy ("Run AI segmentation with custom model …"). Consider a parallel branch for custom instance models for consistency. If you adopt the conditional in the INFER_INSTANCES block (see earlier comment), this may be unnecessary.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- frontend/javascripts/admin/job/job_list_view.tsx (4 hunks)
- frontend/javascripts/types/api_types.ts (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/javascripts/types/api_types.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: frontend-tests
- GitHub Check: backend-tests
- GitHub Check: build-smoketest-push
🔇 Additional comments (3)
frontend/javascripts/admin/job/job_list_view.tsx (3)
129-129: LGTM: harmless wrapper for icon
Wrapping the icon in a span is fine and doesn’t affect semantics; spacing remains controlled by the icon’s className.
287-297: Good fix: explicit comparison and deprecated mapping
Nice correction: the branch no longer evaluates truthy due to a bare enum and correctly maps DEPRECATED_TRAIN_MODEL to "neuron model".
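To make the bug class concrete, here is a minimal TypeScript sketch of why a bare enum operand makes such a check pass for every job type. The enum values below are hypothetical; the real APIJobType lives in frontend/javascripts/types/api_types.ts.

// Minimal sketch; enum values are made up for illustration.
enum APIJobType {
  TRAIN_NEURON_MODEL = "train_neuron_model",
  DEPRECATED_TRAIN_MODEL = "train_model",
  INFER_INSTANCES = "infer_instances",
}

// Buggy pattern: the right-hand operand is a bare, non-empty string enum value,
// so the "||" expression is truthy regardless of jobType.
const isTrainingJobBuggy = (jobType: APIJobType) =>
  Boolean(jobType === APIJobType.TRAIN_NEURON_MODEL || APIJobType.DEPRECATED_TRAIN_MODEL);

// Fixed pattern: both operands are explicit comparisons against jobType.
const isTrainingJobFixed = (jobType: APIJobType) =>
  jobType === APIJobType.TRAIN_NEURON_MODEL || jobType === APIJobType.DEPRECATED_TRAIN_MODEL;

console.log(isTrainingJobBuggy(APIJobType.INFER_INSTANCES)); // true  (wrong)
console.log(isTrainingJobFixed(APIJobType.INFER_INSTANCES)); // false (correct)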
287-297: Please manually verify bare enum usage and INFER_INSTANCES coverage
To minimize regressions, ensure that:
- No conditional branch uses a bare enum constant (e.g. || APIJobType.X) without an explicit comparison.
- The INFER_INSTANCES job type is rendered in all relevant UI components (action buttons, modals, list views, etc.).
You can spot-check with:
rg '\|\|\s*APIJobType\.[A-Z0-9_]+' -C2 frontend/javascripts
rg 'INFER_INSTANCES' -C2 frontend/javascripts
AI Neuron inferral for layer <i>{job.layerName}</i> of{" "}
<Link to={linkToDataset}>{job.datasetName}</Link>{" "}
</span>
);
User-visible wording: fix “inferral” and grammar
These UI strings are shown to end-users. “Inferral” is non-standard; “inference” is the correct term. Also, “AI instances segmentation” should be “AI instance segmentation.”
Apply this diff to correct the wording:
- AI Neuron inferral for layer <i>{job.layerName}</i> of{" "}
+ AI Neuron inference for layer <i>{job.layerName}</i> of{" "}
- AI Mitochondria inferral for layer <i>{job.layerName}</i> of{" "}
+ AI Mitochondria inference for layer <i>{job.layerName}</i> of{" "}
- AI instances segmentation for layer <i>{job.layerName}</i> of{" "}
+ AI instance segmentation for layer <i>{job.layerName}</i> of{" "}
Also applies to: 259-262, 263-269
🤖 Prompt for AI Agents
In frontend/javascripts/admin/job/job_list_view.tsx around lines 238-241 (and
also update the similar strings at 259-262 and 263-269), replace user-facing
words: change “inferral” to “inference” and fix plural/grammar “AI instances
segmentation” to “AI instance segmentation”; update the JSX/text nodes
accordingly so they read e.g. “AI Neuron inference for layer
<i>{job.layerName}</i> of <Link to={linkToDataset}>{job.datasetName}</Link>” and
other occurrences to use “AI instance segmentation.”
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/javascripts/admin/job/job_list_view.tsx (1)
390-396: Add INFER_INSTANCES to the "View Segmentation" branch
The else-if at frontend/javascripts/admin/job/job_list_view.tsx (around lines 390–396) that renders the "View Segmentation" action omits APIJobType.INFER_INSTANCES, so instance inference jobs with a valid resultLink won't show any view action.
Apply this patch:
  } else if (
    job.type === APIJobType.INFER_NUCLEI ||
    job.type === APIJobType.INFER_NEURONS ||
    job.type === APIJobType.MATERIALIZE_VOLUME_ANNOTATION ||
    job.type === APIJobType.COMPUTE_MESH_FILE ||
    job.type === APIJobType.DEPRECATED_INFER_WITH_MODEL ||
+   job.type === APIJobType.INFER_INSTANCES ||
    job.type === APIJobType.INFER_MITOCHONDRIA
  ) {
♻️ Duplicate comments (1)
frontend/javascripts/admin/job/job_list_view.tsx (1)
287-297: Fixed always-true condition and correct naming for deprecated type
Good catch: explicitly comparing job.type === APIJobType.DEPRECATED_TRAIN_MODEL removes the always-true branch. Treating the deprecated variant as a neuron model aligns with prior discussion.
🧹 Nitpick comments (1)
frontend/javascripts/admin/job/job_list_view.tsx (1)
408-416: Add completion hint for TRAIN_INSTANCE_MODEL
After successful training, users should see the same "may now be selected…" hint for instance models as for neuron models.
Apply this diff:
- } else if (
-   job.type === APIJobType.TRAIN_NEURON_MODEL ||
-   job.type === APIJobType.DEPRECATED_TRAIN_MODEL
- ) {
+ } else if (
+   job.type === APIJobType.TRAIN_NEURON_MODEL ||
+   job.type === APIJobType.DEPRECATED_TRAIN_MODEL ||
+   job.type === APIJobType.TRAIN_INSTANCE_MODEL
+ ) {
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- frontend/javascripts/admin/job/job_list_view.tsx (4 hunks)
- frontend/javascripts/types/api_types.ts (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/javascripts/types/api_types.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: backend-tests
- GitHub Check: build-smoketest-push
🔇 Additional comments (4)
frontend/javascripts/admin/job/job_list_view.tsx (4)
129-131: Icon wrapper simplification LGTM
Wrapping the icon in a plain span is fine since each icon already carries the icon-margin-right class. No spacing regressions expected.
238-241: Clearer neuron inference description LGTM
Good differentiation between built-in vs. custom model inference using modelId and improved copy with italics for the layer name.
259-262: Mitochondria inference description LGTM
Consistent phrasing and styling with the neuron case.
273-275: Alignment description LGTM
Minor copy tweak with italics reads well and is consistent with other entries.
{`Train ${modelName} on ${numberOfTrainingAnnotations} ${Utils.pluralize("annotation", numberOfTrainingAnnotations)}. `}
{getShowTrainingDataLink(job.trainingAnnotations)}
</span>
🛠️ Refactor suggestion
Guard “Show Training Data” link against empty arrays
If trainingAnnotations is an empty array, getShowTrainingDataLink will access index 0 and throw. Gate the link on count > 0.
Apply this diff:
- {getShowTrainingDataLink(job.trainingAnnotations)}
+ {numberOfTrainingAnnotations > 0 &&
+ getShowTrainingDataLink(job.trainingAnnotations)}
Alternatively, harden getShowTrainingDataLink itself to return null when length === 0:
export const getShowTrainingDataLink = (trainingAnnotations?: { annotationId: string }[]) => {
if (!trainingAnnotations || trainingAnnotations.length === 0) return null;
return trainingAnnotations.length > 1
? /* multi-annotation modal link */
: /* single annotation link */;
};
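For completeness, a tiny sketch of the empty-array failure mode that both variants guard against (the array shape is assumed from the call site above):

const trainingAnnotations: { annotationId: string }[] = [];
// Without a guard, index 0 of an empty array is undefined,
// and reading a property on it throws a TypeError at render time.
const first = trainingAnnotations[0]; // undefined
console.log(first.annotationId); // throws: cannot read properties of undefined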
🤖 Prompt for AI Agents
In frontend/javascripts/admin/job/job_list_view.tsx around lines 299 to 301, the
call to getShowTrainingDataLink(job.trainingAnnotations) can throw when
trainingAnnotations is an empty array because the helper accesses index 0; guard
the link render by only calling getShowTrainingDataLink when
job.trainingAnnotations exists and has length > 0 (e.g. wrap with a conditional
check for length > 0), or alternatively modify getShowTrainingDataLink to return
null when trainingAnnotations is falsy or length === 0 so callers can safely
render its result.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
This PR does three things:
There is also a lot of naming confusion between our pre-trained models, the custom model that one can train, generic instance segmentation and EM_nuclei models (essentially the same) etc. Something for a follow-up PR.
max_distance_nm
seed_generator_distance_threshold
Splits starts_jobs_modal.tsx into separate files for components, tabs, forms, hooks etc.
URL of deployed dev instance (used for testing):
Steps to test:
Training a new Instances Model:
Using an instances model:
Issues:
(Please delete unneeded items, merge only when none are left open)
$PR_NUMBER.md file in unreleased_changes, or use ./tools/create-changelog-entry.py
)