Utils


source

store_variables

 store_variables (pkl_fn:str|pathlib.Path, size:list, apply_reorder:bool,
                  target_spacing:int|list)

Save variable values to a pickle file.
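A minimal usage sketch (the file name and values are illustrative; the import path assumes fastMONAI's usual top-level re-exports):

    >>> from fastMONAI.vision_all import store_variables
    >>> store_variables(
    ...     pkl_fn='exported_variables.pkl',
    ...     size=[256, 256, 160],
    ...     apply_reorder=True,
    ...     target_spacing=[1.0, 1.0, 1.0],
    ... )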


source

load_variables

 load_variables (pkl_fn:str|pathlib.Path)

Load stored variable values from a pickle file.

Args:
    pkl_fn: File path of the pickle file to be loaded.

Returns:
    The deserialized value of the pickled data.
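Paired with store_variables this gives a simple round trip. A sketch, assuming the values were stored as a sequence in the order shown for store_variables above:

    >>> size, apply_reorder, target_spacing = load_variables(pkl_fn='exported_variables.pkl')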

Patch-based inference settings


source

store_patch_variables

 store_patch_variables (pkl_fn:str|pathlib.Path, patch_size:list,
                        patch_overlap:int|float|list,
                        aggregation_mode:str, apply_reorder:bool=False,
                        target_spacing:list=None,
                        sampler_type:str='uniform',
                        label_probabilities:dict=None,
                        samples_per_volume:int=8, queue_length:int=300,
                        queue_num_workers:int=4,
                        keep_largest_component:bool=False)

Save patch-based training and inference configuration to a pickle file.

Args:
    pkl_fn: Path to save the pickle file.
    patch_size: Size of patches [x, y, z].
    patch_overlap: Overlap for inference (int, float 0-1, or list).
    aggregation_mode: GridAggregator mode ('crop', 'average', 'hann').
    apply_reorder: Whether to reorder to canonical (RAS+) orientation.
    target_spacing: Target voxel spacing [x, y, z].
    sampler_type: Type of sampler used during training.
    label_probabilities: Label probabilities for LabelSampler.
    samples_per_volume: Number of patches extracted per volume during training.
    queue_length: Maximum number of patches in the queue buffer.
    queue_num_workers: Number of workers for parallel patch extraction.
    keep_largest_component: If True, keep only the largest connected component in binary segmentation predictions during inference.

Example:
    >>> store_patch_variables(
    ...     'patch_settings.pkl',
    ...     patch_size=[96, 96, 96],
    ...     patch_overlap=0.5,
    ...     aggregation_mode='hann',
    ...     apply_reorder=True,
    ...     target_spacing=[1.0, 1.0, 1.0],
    ...     samples_per_volume=16,
    ...     keep_largest_component=True
    ... )


source

load_patch_variables

 load_patch_variables (pkl_fn:str|pathlib.Path)

Load patch-based training and inference configuration from a pickle file.

Args:
    pkl_fn: Path to the pickle file.

Returns:
    Dictionary with patch configuration including:
    - patch_size, patch_overlap, aggregation_mode
    - apply_reorder, target_spacing, sampler_type, label_probabilities
    - samples_per_volume, queue_length, queue_num_workers

Example:
    >>> config = load_patch_variables('patch_settings.pkl')
    >>> from fastMONAI.vision_patch import PatchConfig
    >>> patch_config = PatchConfig(**config)


source

ModelTrackingCallback

 ModelTrackingCallback (model_name:str, loss_function:str,
                        item_tfms:list[typing.Any], size:list[int],
                        target_spacing:list[float], apply_reorder:bool,
                        experiment_name:str=None, run_name:str=None,
                        auto_start:bool=False, patch_config:dict=None,
                        extra_params:dict=None, extra_tags:dict=None)

A FastAI callback for comprehensive MLflow experiment tracking.

This callback automatically logs hyperparameters, metrics, model artifacts, and configuration to MLflow during training. If a SaveModelCallback is present, the best model checkpoint will also be logged as an artifact.

Supports auto-managed runs when created via create_mlflow_callback().
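A manual-construction sketch (all values are illustrative stand-ins for your own setup, and learn and lr are assumed to exist in your training script; with the default auto_start=False the run is opened explicitly, mirroring the pattern in the create_mlflow_callback example below):

    >>> import mlflow
    >>> mlflow_callback = ModelTrackingCallback(
    ...     model_name='Task02_Heart_UNet',
    ...     loss_function='DiceLoss',
    ...     item_tfms=[],
    ...     size=[96, 96, 96],
    ...     target_spacing=[1.0, 1.0, 1.0],
    ...     apply_reorder=True,
    ... )
    >>> with mlflow.start_run(run_name='training'):
    ...     learn.fit_one_cycle(30, lr, cbs=[mlflow_callback])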


source

create_mlflow_callback

 create_mlflow_callback (learn, experiment_name:str=None,
                         run_name:str=None, auto_start:bool=True,
                         model_name:str=None, extra_params:dict=None,
                         extra_tags:dict=None)

Create MLflow tracking callback with auto-extracted configuration.

This factory function automatically extracts configuration from the Learner, eliminating the need to manually specify parameters like size, transforms, loss function, etc.

Auto-extracts from Learner:
- Preprocessing: apply_reorder, target_spacing, size/patch_size
- Transforms: item_tfms or pre_patch_tfms
- Training: loss_func, model architecture

Args:
    learn: fastai Learner instance.
    experiment_name: MLflow experiment name. If None, uses the model name.
    run_name: MLflow run name. If None, auto-generates one with a timestamp.
    auto_start: If True, auto-starts/stops the MLflow run in before_fit/after_fit.
    model_name: Override the auto-extracted model name for registration.
    extra_params: Additional parameters to log (e.g., {'dropout': 0.5}).
    extra_tags: MLflow tags to set on the run.

Returns:
    ModelTrackingCallback ready to use with learn.fit().

Example:
    >>> # Instead of this (6 manual params):
    >>> # mlflow_callback = ModelTrackingCallback(
    >>> #     model_name=f"{task}_{model._get_name()}",
    >>> #     loss_function=loss_func.loss_func._get_name(),
    >>> #     item_tfms=item_tfms,
    >>> #     size=size,
    >>> #     target_spacing=target_spacing,
    >>> #     apply_reorder=True,
    >>> # )
    >>> # with mlflow.start_run(run_name="training"):
    >>> #     learn.fit_one_cycle(30, lr, cbs=[mlflow_callback])
    >>>
    >>> # Do this (zero manual params):
    >>> callback = create_mlflow_callback(learn, experiment_name="Task02_Heart")
    >>> learn.fit_one_cycle(30, lr, cbs=[callback, save_best])

# Test auto-extraction helper functions
from fastcore.test import test_eq
from dataclasses import dataclass

# Test _detect_patch_workflow
class MockStandardDls:
    bs = 4
    after_item = None
mock_std = MockStandardDls()
test_eq(_detect_patch_workflow(mock_std), False)

@dataclass
class MockPatchConfig:
    patch_size: list = None
    patch_overlap: float = 0.5
    samples_per_volume: int = 8
    sampler_type: str = 'uniform'
    label_probabilities: dict = None
    queue_length: int = 300
    aggregation_mode: str = 'hann'
    padding_mode: int = 0
    keep_largest_component: bool = False
    apply_reorder: bool = True
    target_spacing: list = None
    
    def __post_init__(self):
        if self.patch_size is None:
            self.patch_size = [96, 96, 96]

class MockPatchDls:
    bs = 4
    patch_config = MockPatchConfig()
mock_patch = MockPatchDls()
test_eq(_detect_patch_workflow(mock_patch), True)

# Test _extract_size_from_transforms with mock transform
class MockPadOrCrop:
    def __init__(self, target_shape):
        self.target_shape = target_shape

class MockTransform:
    def __init__(self, target_shape):
        self.pad_or_crop = MockPadOrCrop(target_shape)

tfms = [MockTransform([128, 128, 64])]
test_eq(_extract_size_from_transforms(tfms), [128, 128, 64])
test_eq(_extract_size_from_transforms(None), None)
test_eq(_extract_size_from_transforms([]), None)

print("All auto-extraction helper tests passed!")
All auto-extraction helper tests passed!

source

MLflowUIManager

 MLflowUIManager ()

Initialize self. See help(type(self)) for accurate signature.