rising.transforms

Provides the Augmentations and Transforms used by the rising.loading.DataLoader.

Implementations include:

  • Transformation Base Classes

  • Composed Transforms

  • Affine Transforms

  • Channel Transforms

  • Cropping Transforms

  • Device Transforms

  • Format Transforms

  • Intensity Transforms

  • Kernel Transforms

  • Spatial Transforms

  • Tensor Transforms

  • Utility Transforms
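
A typical pipeline instantiates a few of these transforms, composes them, and hands the result to the DataLoader. A minimal sketch (the batch_transforms argument of rising.loading.DataLoader and the tiny dict-returning dataset below are assumptions for illustration):

    import torch
    from torch.utils.data import Dataset

    from rising.loading import DataLoader
    from rising.transforms import Compose, Rotate, Scale


    class ToyDictDataset(Dataset):
        # hypothetical dataset yielding dicts, as rising transforms expect
        def __len__(self) -> int:
            return 8

        def __getitem__(self, index: int) -> dict:
            return {"data": torch.rand(1, 32, 32)}


    # batch-level augmentation: rotate by 30 degrees, then scale by factor 1.2
    transforms = Compose(Rotate(rotation=30, degree=True), Scale(scale=1.2))
    loader = DataLoader(ToyDictDataset(), batch_size=4, batch_transforms=transforms)

    for batch in loader:
        print(batch["data"].shape)  # torch.Size([4, 1, 32, 32])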

Transformation Base Classes

class rising.transforms.abstract.AbstractTransform(grad=False, **kwargs)[source]

Bases: torch.nn.Module

Base class for all transforms

Parameters

grad (bool) – enable gradient computation inside transformation

__call__(*args, **kwargs)[source]

Call the superclass with the correct torch context

Parameters
  • *args – forwarded positional arguments

  • **kwargs – forwarded keyword arguments

Returns

transformed data

Return type

Any

forward(**data)[source]

Implement transform functionality here

Parameters

**data – dict with data

Returns

dict with transformed data

Return type

dict

register_sampler(name, sampler, *args, **kwargs)[source]

Registers a parameter sampler to the transform. Internally, a property is created that forwards attribute accesses to calls of the sampler.

Parameters
  • name (str) – the property name

  • sampler (Union[Sequence, AbstractParameter]) – the sampler. If not already a sampler, it will be wrapped into a sampler that always returns the same element

  • *args – additional positional arguments (will be forwarded to sampler call)

  • **kwargs – additional keyword arguments (will be forwarded to sampler call)
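
Subclasses implement forward() and may register samplers for their random parameters. A minimal sketch of a custom transform (the offset range and the use of rising.random.UniformParameter as the sampler are illustrative choices, not prescribed by the API):

    import torch

    from rising.random import UniformParameter
    from rising.transforms.abstract import AbstractTransform


    class AddRandomOffset(AbstractTransform):
        # hypothetical transform: adds a sampled scalar offset to "data"
        def __init__(self, offset_range=(0.0, 1.0), grad=False):
            super().__init__(grad=grad)
            # creates a property self.offset; each access draws from the sampler
            self.register_sampler("offset", UniformParameter(*offset_range))

        def forward(self, **data) -> dict:
            data["data"] = data["data"] + self.offset
            return data


    trafo = AddRandomOffset(offset_range=(-0.1, 0.1))
    out = trafo(data=torch.zeros(2, 1, 4, 4))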

class rising.transforms.abstract.BaseTransform(augment_fn, *args, keys=('data', ), grad=False, property_names=(), **kwargs)[source]

Bases: rising.transforms.abstract.AbstractTransform

Transform to apply a functional interface to given keys

Warning

This transform should not be used with functions which have randomness built in, because it will result in different augmentations per key.

Parameters
  • augment_fn (Callable[[Tensor], Any]) – function for augmentation

  • *args – positional arguments passed to augment_fn

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • property_names (Sequence[str]) – a tuple containing all the properties to call during forward pass

  • **kwargs – keyword arguments passed to augment_fn

forward(**data)[source]

Apply transformation

Parameters

data – dict with tensors

Returns

dict with augmented data

Return type

dict
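
Because augment_fn is applied per key, deterministic functions are the safe choice here. A small sketch wrapping torch.abs (the second key name is illustrative):

    import torch
    from rising.transforms.abstract import BaseTransform

    # deterministic augment_fn, so every key receives the identical operation
    trafo = BaseTransform(augment_fn=torch.abs, keys=("data", "extra"))

    batch = {"data": torch.randn(2, 1, 8, 8), "extra": torch.randn(2, 1, 8, 8)}
    out = trafo(**batch)  # both entries passed through torch.abs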

class rising.transforms.abstract.PerSampleTransform(augment_fn, *args, keys=('data', ), grad=False, property_names=(), **kwargs)[source]

Bases: rising.transforms.abstract.BaseTransform

Apply the transformation to each sample in the batch individually. augment_fn must be callable with an option out, in which the results are saved.

Warning

This transform should not be used with functions which have randomness built in, because it will result in different augmentations per sample and key.

Parameters
  • augment_fn (Callable[[Tensor], Any]) – function for augmentation

  • *args – positional arguments passed to augment_fn

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • property_names (Sequence[str]) – a tuple containing all the properties to call during forward pass

  • **kwargs – keyword arguments passed to augment_fn

forward(**data)[source]
Parameters

data – dict with tensors

Returns

dict with augmented data

Return type

dict

class rising.transforms.abstract.PerChannelTransform(augment_fn, per_channel=False, keys=('data', ), grad=False, property_names=(), **kwargs)[source]

Bases: rising.transforms.abstract.BaseTransform

Apply transformation per channel (but still to whole batch)

Warning

This transform should not be used with functions which have randomness built in, because it will result in different augmentations per channel and key.

Parameters
  • augment_fn (Callable[[Tensor], Any]) – function for augmentation

  • per_channel (bool) – enable transformation per channel

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to augment_fn

forward(**data)[source]

Apply transformation

Parameters

data – dict with tensors

Returns

dict with augmented data

Return type

dict

class rising.transforms.abstract.BaseTransformSeeded(augment_fn, *args, keys=('data', ), grad=False, property_names=(), **kwargs)[source]

Bases: rising.transforms.abstract.BaseTransform

Transform to apply a functional interface to given keys and use the same PyTorch(!) seed for every key.

Parameters
  • augment_fn (Callable[[Tensor], Any]) – function for augmentation

  • *args – positional arguments passed to augment_fn

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • property_names (Sequence[str]) – a tuple containing all the properties to call during forward pass

  • **kwargs – keyword arguments passed to augment_fn

forward(**data)[source]

Apply transformation and use same seed for every key

Parameters

data – dict with tensors

Returns

dict with augmented data

Return type

dict
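
The seeded behaviour can be verified directly: wrapping a function that draws from torch's global RNG yields identical results for every key. A sketch (add_uniform_noise is a hypothetical helper):

    import torch
    from rising.transforms.abstract import BaseTransformSeeded

    def add_uniform_noise(x: torch.Tensor) -> torch.Tensor:
        # uses torch's global RNG, so the per-key seed matters
        return x + torch.rand_like(x)

    trafo = BaseTransformSeeded(augment_fn=add_uniform_noise, keys=("data", "seg"))
    out = trafo(data=torch.zeros(2, 1, 4, 4), seg=torch.zeros(2, 1, 4, 4))

    # the same seed is used for every key, so the noise patterns match
    assert torch.equal(out["data"], out["seg"])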

Compose Transforms

class rising.transforms.compose.Compose(*transforms, shuffle=False, transform_call=<function dict_call>)[source]

Bases: rising.transforms.abstract.AbstractTransform

Compose multiple transforms

Parameters
  • transforms (Union[AbstractTransform, Sequence[AbstractTransform]]) – one or multiple transformations which are applied in consecutive order

  • shuffle (bool) – apply transforms in random order

  • transform_call (Callable[[Any, Callable], Any]) – function which determines how transforms are called. By default Mappings and Sequences are unpacked during the transform.

forward(*seq_like, **map_like)[source]

Apply transforms in consecutive order. Can handle either Sequence-like or Mapping-like data.

Parameters
  • *seq_like – data which is unpacked like a Sequence

  • **map_like – data which is unpacked like a dict

Returns

transformed data

Return type

Union[Sequence, Mapping]

property shuffle[source]

Getter for attribute shuffle

Returns

True if shuffle is enabled, False otherwise

Return type

bool

property transforms[source]

Transforms getter

Returns

transforms to compose

Return type

torch.nn.ModuleList
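
A short usage sketch composing two of the affine transforms documented below (parameter values are illustrative):

    import torch
    from rising.transforms import Compose, Rotate, Scale

    pipeline = Compose(
        Rotate(rotation=10, degree=True),
        Scale(scale=1.1),
        shuffle=False,  # keep the given order
    )

    batch = {"data": torch.rand(2, 1, 16, 16)}
    out = pipeline(**batch)  # dict: rotated, then scaled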

class rising.transforms.compose.DropoutCompose(*transforms, dropout=0.5, shuffle=False, random_sampler=None, transform_call=<function dict_call>, **kwargs)[source]

Bases: rising.transforms.compose.Compose

Compose multiple transforms into one and apply each of them with a given probability

Parameters
  • *transforms – one or multiple transformations which are applied in consecutive order

  • dropout (Union[float, Sequence[float]]) – if provided as a float, each transform is skipped with the given probability; if dropout is a sequence, it needs to specify the dropout probability for each given transform

  • shuffle (bool) – apply transforms in random order

  • random_sampler (Optional[ContinuousParameter]) – a continuous parameter sampler. Samples a random value for each of the transforms.

  • transform_call (Callable[[Any, Callable], Any]) – function which determines how transforms are called. By default Mappings and Sequences are unpacked during the transform.

Raises

ValueError – if dropout is a sequence and does not have the same length as transforms

forward(*seq_like, **map_like)[source]

Apply transforms in consecutive order. Can handle either Sequence-like or Mapping-like data.

Parameters
  • *seq_like – data which is unpacked like a Sequence

  • **map_like – data which is unpacked like a dict

Returns

dict with transformed data

Return type

Union[Sequence, Mapping]

class rising.transforms.compose.OneOf(*transforms, weights=None, p=1.0, transform_call=<function dict_call>)[source]

Bases: rising.transforms.abstract.AbstractTransform

Apply one of the given transforms.

Parameters
  • *transforms – transforms to choose from

  • weights (Optional[Sequence[float]]) – additional weights for transforms

  • p (float) – probability that one of the transforms is applied

  • transform_call (Callable[[Any, Callable], Any]) – function which determines how transforms are called. By default Mappings and Sequences are unpacked during the transform.

forward(**data)[source]

Implement transform functionality here

Parameters

**data – dict with data

Returns

dict with transformed data

Return type

dict
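
A sketch contrasting the two random containers (probabilities and transforms are illustrative):

    import torch
    from rising.transforms import DropoutCompose, OneOf, Rotate, Scale

    # each transform is skipped independently with probability 0.5
    maybe_both = DropoutCompose(
        Rotate(rotation=10, degree=True), Scale(scale=1.1), dropout=0.5
    )

    # exactly one of the two transforms is chosen whenever OneOf fires (p=1.0 here)
    one_of = OneOf(Rotate(rotation=10, degree=True), Scale(scale=1.1), p=1.0)

    batch = {"data": torch.rand(2, 1, 16, 16)}
    out = one_of(**maybe_both(**batch))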

dict_call

rising.transforms.compose.dict_call(batch, transform)[source]

Unpacks the dict for every transformation

Parameters
  • batch (dict) – current batch which is passed to transform

  • transform (Callable) – transform to perform

Returns

transformed batch

Return type

Any
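
dict_call unpacks the batch dict into keyword arguments of the transform. If your transforms expect the whole dict as a single positional argument instead, a custom transform_call can be supplied; a hypothetical sketch (whole_dict_call is not part of rising):

    from typing import Any, Callable

    def whole_dict_call(batch: dict, transform: Callable) -> Any:
        # pass the dict itself instead of unpacking it into keyword arguments
        return transform(batch)

    # usage: Compose(my_transform, transform_call=whole_dict_call)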

Affine Transforms

class rising.transforms.affine.Affine(matrix=None, keys=('data', ), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, per_sample=True, **kwargs)[source]

Bases: rising.transforms.abstract.BaseTransform

Class Performing an Affine Transformation on a given sample dict. The transformation will be applied to all the dict-entries specified in keys.

Parameters
  • matrix (Union[Tensor, Sequence[Sequence[float]], None]) – if given, overwrites the parameters for scale, rotation and translation. Should be a matrix of shape [(BATCHSIZE,) NDIM, NDIM(+1)]. This matrix represents the whole transformation matrix

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • output_size (Optional[tuple]) – if given, this will be the resulting image size. Defaults to None

  • adjust_size (bool) – if True, the resulting image size will be calculated dynamically to ensure that the whole image fits.

  • interpolation_mode (str) – interpolation mode to calculate output values 'bilinear' | 'nearest'. Default: 'bilinear'

  • padding_mode (str) – padding mode for outside grid values 'zeros' | 'border' | 'reflection'. Default: 'zeros'

  • align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic.

  • reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]

  • per_sample (bool) – sample different values for each element in the batch. The transform is still applied in a batchwise fashion.

  • **kwargs – additional keyword arguments passed to the affine transform

assemble_matrix(**data)[source]

Assembles the matrix (and takes care of batching and having it on the right device and in the correct dtype and dimensionality).

Parameters

**data – the data to be transformed. Will be used to determine batchsize, dimensionality, dtype and device

Returns

the (batched) transformation matrix

Return type

torch.Tensor

forward(**data)[source]

Assembles the matrix and applies it to the specified sample-entities.

Parameters

**data – the data to transform

Returns

dictionary containing the transformed data

Return type

dict
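
A sketch passing an explicit 2D transformation matrix (the shear values are illustrative); per the matrix parameter above, the expected shape is [(BATCHSIZE,) NDIM, NDIM(+1)]:

    import torch
    from rising.transforms import Affine

    # 2D matrix of shape [NDIM, NDIM + 1]: identity plus a horizontal shear
    shear = torch.tensor([[1.0, 0.25, 0.0],
                          [0.0, 1.00, 0.0]])

    trafo = Affine(matrix=shear, padding_mode="border")
    batch = {"data": torch.rand(2, 1, 32, 32)}
    out = trafo(**batch)  # sheared images under out["data"]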

class rising.transforms.affine.BaseAffine(scale=None, rotation=None, translation=None, degree=False, image_transform=True, keys=('data', ), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, per_sample=True, **kwargs)[source]

Bases: rising.transforms.affine.Affine

Class performing a basic Affine Transformation on a given sample dict. The transformation will be applied to all the dict-entries specified in keys.

Parameters
  • scale (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter], None]) – the scale factor(s). Supported are: a single parameter (as float or int), which will be replicated for all dimensions and batch samples; a parameter per dimension, which will be replicated for all batch samples; None, which will be treated as a scaling factor of 1

  • rotation (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter], None]) – the rotation factor(s). The rotation is performed in consecutive order: axis 0 -> axis 1 (-> axis 2). Supported are: a single parameter (as float or int), which will be replicated for all dimensions and batch samples; a parameter per dimension, which will be replicated for all batch samples; None, which will be treated as a rotation angle of 0

  • translation (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter], None]) – the translation offset(s) relative to the image (should be in the range [0, 1]). Supported are: a single parameter (as float or int), which will be replicated for all dimensions and batch samples; a parameter per dimension, which will be replicated for all batch samples; None, which will be treated as a translation offset of 0

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • degree (bool) – whether the given rotation(s) are in degrees. Only valid for rotation parameters, which aren’t passed as full transformation matrix.

  • output_size (Optional[tuple]) – if given, this will be the resulting image size. Defaults to None

  • adjust_size (bool) – if True, the resulting image size will be calculated dynamically to ensure that the whole image fits.

  • interpolation_mode (str) – interpolation mode to calculate output values 'bilinear' | 'nearest'. Default: 'bilinear'

  • padding_mode (str) – padding mode for outside grid values 'zeros' | 'border' | 'reflection'. Default: 'zeros'

  • align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic.

  • reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]

  • per_sample (bool) – sample different values for each element in the batch. The transform is still applied in a batched wise fashion.

  • **kwargs – additional keyword arguments passed to the affine transform

assemble_matrix(**data)[source]

Assembles the matrix (and takes care of batching and having it on the right device and in the correct dtype and dimensionality).

Parameters

**data – the data to be transformed. Will be used to determine batchsize, dimensionality, dtype and device

Returns

the (batched) transformation matrix

Return type

torch.Tensor

sample_for_batch(name, batchsize)[source]

Sample elements for batch

Parameters
  • name (str) – name of parameter

  • batchsize (int) – batch size

Returns

sampled elements

Return type

Optional[Union[Any, Sequence[Any]]]

class rising.transforms.affine.StackedAffine(*transforms, keys=('data', ), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, **kwargs)[source]

Bases: rising.transforms.affine.Affine

Class to stack multiple affines with dynamic ensembling by matrix multiplication to avoid multiple interpolations.

Parameters
  • transforms (Union[Affine, Sequence[Union[Sequence[Affine], Affine]]]) – the transforms to stack. Each transform must have a function called assemble_matrix, which is called to dynamically assemble stacked matrices. Afterwards these transformations are stacked by matrix-multiplication to only perform a single interpolation

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • output_size (Optional[tuple]) – if given, this will be the resulting image size. Defaults to None

  • adjust_size (bool) – if True, the resulting image size will be calculated dynamically to ensure that the whole image fits.

  • interpolation_mode (str) – interpolation mode to calculate output values 'bilinear' | 'nearest'. Default: 'bilinear'

  • padding_mode (str) – padding mode for outside grid values 'zeros' | 'border' | 'reflection'. Default: 'zeros'

  • align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic.

  • reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]

  • **kwargs – additional keyword arguments passed to the affine transform

assemble_matrix(**data)[source]

Handles the matrix assembly and stacking

Parameters

**data – the data to be transformed. Will be used to determine batchsize, dimensionality, dtype and device

Returns

the (batched) transformation matrix

Return type

torch.Tensor

class rising.transforms.affine.Rotate(rotation, keys=('data', ), grad=False, degree=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, **kwargs)[source]

Bases: rising.transforms.affine.BaseAffine

Class Performing a Rotation-Only Affine Transformation on a given sample dict. The rotation is applied in consecutive order: rot axis 0 -> rot axis 1 -> rot axis 2. The transformation will be applied to all the dict-entries specified in keys.

Parameters
  • rotation (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter]]) – the rotation factor(s). The rotation is performed in consecutive order: axis 0 -> axis 1 (-> axis 2). Supported are: a single parameter (as float or int), which will be replicated for all dimensions and batch samples; a parameter per dimension, which will be replicated for all batch samples; None, which will be treated as a rotation angle of 0

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • degree (bool) – whether the given rotation(s) are in degrees. Only valid for rotation parameters, which aren’t passed as full transformation matrix.

  • output_size (Optional[tuple]) – if given, this will be the resulting image size. Defaults to None

  • adjust_size (bool) – if True, the resulting image size will be calculated dynamically to ensure that the whole image fits.

  • interpolation_mode (str) – interpolation mode to calculate output values 'bilinear' | 'nearest'. Default: 'bilinear'

  • padding_mode (str) – padding mode for outside grid values 'zeros' | 'border' | 'reflection'. Default: 'zeros'

  • align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic.

  • reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]

  • **kwargs – additional keyword arguments passed to the affine transform
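
A sketch drawing a random angle per batch element (UniformParameter from rising.random is used as the sampler; the range is illustrative):

    import torch
    from rising.random import UniformParameter
    from rising.transforms import Rotate

    # angle sampled uniformly from [-30, 30] degrees
    trafo = Rotate(rotation=UniformParameter(-30.0, 30.0), degree=True)
    batch = {"data": torch.rand(4, 1, 32, 32)}
    out = trafo(**batch)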

class rising.transforms.affine.Scale(scale, keys=('data', ), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, **kwargs)[source]

Bases: rising.transforms.affine.BaseAffine

Class Performing a Scale-Only Affine Transformation on a given sample dict. The transformation will be applied to all the dict-entries specified in keys.

Parameters
  • scale (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter]]) – the scale factor(s). Supported are: a single parameter (as float or int), which will be replicated for all dimensions and batch samples; a parameter per dimension, which will be replicated for all batch samples; None, which will be treated as a scaling factor of 1

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • output_size (Optional[tuple]) – if given, this will be the resulting image size. Defaults to None

  • adjust_size (bool) – if True, the resulting image size will be calculated dynamically to ensure that the whole image fits.

  • interpolation_mode (str) – interpolation mode to calculate output values 'bilinear' | 'nearest'. Default: 'bilinear'

  • padding_mode (str) – padding mode for outside grid values 'zeros' | 'border' | 'reflection'. Default: 'zeros'

  • align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic.

  • reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]

  • **kwargs – additional keyword arguments passed to the affine transform

class rising.transforms.affine.Translate(translation, keys=('data', ), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, unit='pixel', reverse_order=False, **kwargs)[source]

Bases: rising.transforms.affine.BaseAffine

Class Performing a Translation-Only Affine Transformation on a given sample dict. The transformation will be applied to all the dict-entries specified in keys.

Parameters
  • translation (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter]]) – the translation offset(s) relative to the image (should be in the range [0, 1]). Supported are: a single parameter (as float or int), which will be replicated for all dimensions and batch samples; a parameter per dimension, which will be replicated for all batch samples; None, which will be treated as a translation offset of 0

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • output_size (Optional[tuple]) – if given, this will be the resulting image size. Defaults to None

  • adjust_size (bool) – if True, the resulting image size will be calculated dynamically to ensure that the whole image fits.

  • interpolation_mode (str) – interpolation mode to calculate output values 'bilinear' | 'nearest'. Default: 'bilinear'

  • padding_mode (str) – padding mode for outside grid values 'zeros' | 'border' | 'reflection'. Default: 'zeros'

  • align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic.

  • unit (str) – defines the unit of the translation. Either 'relative' to the image size or in 'pixel'

  • reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]

  • **kwargs – additional keyword arguments passed to the affine transform

assemble_matrix(**data)[source]

Assembles the matrix (and takes care of batching and having it on the right device and in the correct dtype and dimensionality).

Parameters

**data – the data to be transformed. Will be used to determine batchsize, dimensionality, dtype and device

Returns

the (batched) transformation matrix [N, NDIM, NDIM]

Return type

torch.Tensor

class rising.transforms.affine.Resize(size, keys=('data', ), grad=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, **kwargs)[source]

Bases: rising.transforms.affine.Scale

Class Performing a Resizing Affine Transformation on a given sample dict. The transformation will be applied to all the dict-entries specified in keys.

Parameters
  • size (Union[int, Tuple[int]]) – the target size. If int, this will be repeated for all the dimensions

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • interpolation_mode (str) – interpolation mode to calculate output values 'bilinear' | 'nearest'. Default: 'bilinear'

  • padding_mode (str) – padding mode for outside grid values 'zeros' | 'border' | 'reflection'. Default: 'zeros'

  • align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic.

  • reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]

  • **kwargs – additional keyword arguments passed to the affine transform

Notes

The offsets for shifting back and to origin are calculated on the entry matching the first item in keys for each batch

assemble_matrix(**data)[source]

Handles the matrix assembly and calculates the scale factors for resizing

Parameters

**data – the data to be transformed. Will be used to determine batchsize, dimensionality, dtype and device

Returns

the (batched) transformation matrix

Return type

torch.Tensor
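
A short sketch resizing all entries under "data" (sizes are illustrative):

    import torch
    from rising.transforms import Resize

    trafo = Resize(size=(64, 64))
    batch = {"data": torch.rand(2, 1, 32, 48)}
    out = trafo(**batch)
    print(out["data"].shape)  # torch.Size([2, 1, 64, 64])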

Channel Transforms

class rising.transforms.channel.OneHot(num_classes, keys=('seg', ), dtype=None, grad=False, **kwargs)[source]

Bases: rising.transforms.abstract.BaseTransform

Convert to one hot encoding. One hot encoding is applied in the first dimension, which results in shape N x NumClasses x [same as input], while the input is expected to have shape N x 1 x [arbitrary additional dimensions]

Parameters
  • num_classes (int) – number of classes. If num_classes is None, the number of classes is automatically determined from the current batch (by using the max of the current batch and assuming a consecutive order from zero)

  • dtype (Optional[dtype]) – optionally changes the dtype of the onehot encoding

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to one_hot_batch()

Warning

Input tensor needs to be of type torch.long. This could be achieved by applying TensorOp("long", keys=("seg",)).

class rising.transforms.channel.ArgMax(dim, keepdim=True, keys=('seg', ), grad=False, **kwargs)[source]

Bases: rising.transforms.abstract.BaseTransform

Compute argmax along given dimension. Can be used to revert OneHot encoding.

Parameters
  • dim (int) – dimension to apply argmax

  • keepdim (bool) – whether the output tensor has dim retained or not

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to augment_fn

Warning

The output of the argmax function is always a tensor of dtype long.
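
A round-trip sketch: OneHot expands a long-typed label map, and ArgMax reverts it (shapes are illustrative):

    import torch
    from rising.transforms import ArgMax, OneHot

    seg = torch.randint(0, 4, (2, 1, 8, 8))           # torch.long, N x 1 x ...
    onehot = OneHot(num_classes=4)(seg=seg)["seg"]    # N x 4 x 8 x 8
    restored = ArgMax(dim=1, keepdim=True)(seg=onehot)["seg"]
    assert torch.equal(restored, seg)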

Cropping Transforms

class rising.transforms.crop.CenterCrop(size, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.BaseTransform

Crop the center of the input to the given size.

Parameters
  • size (Union[int, Sequence, AbstractParameter]) – size of crop

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to augment_fn

class rising.transforms.crop.RandomCrop(size, dist=0, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.BaseTransformSeeded

Crop a randomly positioned patch of the given size.

Parameters
  • size (Union[int, Sequence, AbstractParameter]) – size of crop

  • dist (Union[int, Sequence, AbstractParameter]) – minimum distance to border; zero by default

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to augment_fn
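
A minimal sketch of both cropping transforms (tensor shape, crop size and dist are illustrative assumptions):

Examples

>>> import torch
>>> from rising.transforms.crop import CenterCrop, RandomCrop
>>> batch = {"data": torch.rand(2, 1, 64, 64)}
>>> centered = CenterCrop(size=32)(**batch)         # -> [2, 1, 32, 32]
>>> cropped = RandomCrop(size=32, dist=4)(**batch)  # position sampled per call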

Format Transforms

class rising.transforms.format.MapToSeq(*keys, grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.AbstractTransform

Convert dict to sequence

Parameters
  • keys – keys which are mapped into sequence.

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – additional keyword arguments passed to superclass

forward(**data)[source][source]

Convert input

Parameters

data – input dict

Returns

mapped data

Return type

tuple

class rising.transforms.format.SeqToMap(*keys, grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.AbstractTransform

Convert sequence to dict

Parameters
  • keys – keys which are mapped into dict.

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – additional keyword arguments passed to superclass

forward(*data, **kwargs)[source][source]

Convert input

Parameters

data – input tuple

Returns

mapped data

Return type

dict
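
A sketch of the dict-to-sequence round trip (the keys and values are illustrative):

Examples

>>> from rising.transforms.format import MapToSeq, SeqToMap
>>> to_seq = MapToSeq("data", "seg")
>>> to_map = SeqToMap("data", "seg")
>>> seq = to_seq(data=1, seg=2)  # (1, 2)
>>> back = to_map(*seq)          # {'data': 1, 'seg': 2}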

class rising.transforms.format.PopKeys(keys, return_popped=False)[source][source]

Bases: rising.transforms.abstract.AbstractTransform

Pops keys from a given data dict

Parameters
  • keys (Union[Callable, Sequence]) – if callable it must return a boolean for each key indicating whether it should be popped from the dict. if sequence of strings, the strings shall be the keys to be popped

  • return_popped (bool) – whether to also return the popped values (default: False)

forward(**data)[source][source]

Implement transform functionality here

Parameters

**data – dict with data

Returns

dict with transformed data

Return type

dict

class rising.transforms.format.FilterKeys(keys, return_popped=False)[source][source]

Bases: rising.transforms.abstract.AbstractTransform

Filters keys from a given data dict

Parameters
  • keys (Union[Callable, Sequence]) – if callable it must return a boolean for each key indicating whether it should be retained in the dict. if sequence of strings, the strings shall be the keys to be retained

  • return_popped (bool) – whether to also return the popped values (default: False)

forward(**data)[source][source]

Implement transform functionality here

Parameters

**data – dict with data

Returns

dict with transformed data

Return type

dict

class rising.transforms.format.RenameKeys(keys)[source][source]

Bases: rising.transforms.abstract.AbstractTransform

Rename keys inside batch

Parameters

keys (Mapping[Hashable, Hashable]) – keys of mapping define current name and items define the new names

forward(**data)[source][source]

Implement transform functionality here

Parameters

**data – dict with data

Returns

dict with transformed data

Return type

dict
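
A sketch contrasting the three key-manipulation transforms (keys and values are illustrative):

Examples

>>> from rising.transforms.format import PopKeys, FilterKeys, RenameKeys
>>> batch = {"data": 1, "seg": 2, "meta": 3}
>>> PopKeys(keys=("meta",))(**batch)           # {'data': 1, 'seg': 2}
>>> FilterKeys(keys=("data", "seg"))(**batch)  # {'data': 1, 'seg': 2}
>>> RenameKeys({"seg": "label"})(**batch)      # 'seg' becomes 'label'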

Intensity Transforms

class rising.transforms.intensity.Clamp(min, max, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.BaseTransform

Clamp tensor values to the range [min, max]

Parameters
  • min (Union[float, AbstractParameter]) – minimal value

  • max (Union[float, AbstractParameter]) – maximal value

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to augment_fn

class rising.transforms.intensity.NormRange(min, max, keys=('data', ), per_channel=True, grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.PerSampleTransform

Normalize data to the range [min, max]

Parameters
  • min (Union[float, AbstractParameter]) – minimal value

  • max (Union[float, AbstractParameter]) – maximal value

  • keys (Sequence) – keys to normalize

  • per_channel (bool) – normalize per channel

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to normalization function

class rising.transforms.intensity.NormMinMax(keys=('data', ), per_channel=True, grad=False, eps=1e-08, **kwargs)[source][source]

Bases: rising.transforms.abstract.PerSampleTransform

Norm to [0, 1]

Parameters
  • keys (Sequence) – keys to normalize

  • per_channel (bool) – normalize per channel

  • grad (bool) – enable gradient computation inside transformation

  • eps (Optional[float]) – small constant for numerical stability. If None, no constant will be added

  • **kwargs – keyword arguments passed to normalization function

class rising.transforms.intensity.NormZeroMeanUnitStd(keys=('data', ), per_channel=True, grad=False, eps=1e-08, **kwargs)[source][source]

Bases: rising.transforms.abstract.PerSampleTransform

Normalize mean to zero and std to one

Parameters
  • keys (Sequence) – keys to normalize

  • per_channel (bool) – normalize per channel

  • grad (bool) – enable gradient computation inside transformation

  • eps (Optional[float]) – small constant for numerical stability. If None, no constant will be added

  • **kwargs – keyword arguments passed to normalization function

class rising.transforms.intensity.NormMeanStd(mean, std, keys=('data', ), per_channel=True, grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.PerSampleTransform

Normalize mean and std with provided values

Parameters
  • mean (Union[float, Sequence[float]]) – used for mean normalization

  • std (Union[float, Sequence[float]]) – used for std normalization

  • keys (Sequence[str]) – keys to normalize

  • per_channel (bool) – normalize per channel

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to normalization function
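
A minimal sketch of the three normalization flavours (shapes and statistics are illustrative assumptions):

Examples

>>> import torch
>>> from rising.transforms.intensity import NormMinMax, NormZeroMeanUnitStd, NormMeanStd
>>> batch = {"data": torch.rand(2, 3, 16, 16) * 100}
>>> unit_range = NormMinMax()(**batch)             # per-channel range [0, 1]
>>> standardized = NormZeroMeanUnitStd()(**batch)  # zero mean, unit std
>>> fixed = NormMeanStd(mean=50.0, std=29.0)(**batch)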

class rising.transforms.intensity.Noise(noise_type, per_channel=False, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.PerChannelTransform

Add noise to data

Warning

This transform will apply different noise patterns to different keys.

Parameters
  • noise_type (str) – supports all inplace functions of a torch.Tensor

  • per_channel (bool) – enable transformation per channel

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • kwargs – keyword arguments passed to noise function

See also

torch.Tensor.normal_(), torch.Tensor.exponential_()

class rising.transforms.intensity.GaussianNoise(mean, std, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.intensity.Noise

Add gaussian noise to data

Warning

This transform will apply different noise patterns to different keys.

Parameters
  • mean (float) – mean of normal distribution

  • std (float) – std of normal distribution

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to noise function

class rising.transforms.intensity.ExponentialNoise(lambd, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.intensity.Noise

Add exponential noise to data

Warning

This transform will apply different noise patterns to different keys.

Parameters
  • lambd (float) – lambda of exponential distribution

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to noise function
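
A sketch of the noise transforms; Noise takes the name of any inplace torch.Tensor method, while the subclasses fix the distribution (parameter values are assumptions):

Examples

>>> from rising.transforms.intensity import Noise, GaussianNoise, ExponentialNoise
>>> gauss = GaussianNoise(mean=0.0, std=0.1)
>>> expo = ExponentialNoise(lambd=0.5)
>>> generic = Noise("normal_", mean=0.0, std=0.1)  # same distribution as gauss (assumed)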

class rising.transforms.intensity.GammaCorrection(gamma, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.BaseTransform

Apply Gamma correction

Parameters
  • gamma (Union[float, AbstractParameter]) – define gamma

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to superclass
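
A short sketch; gamma may be a fixed float or a sampler (assuming rising.random provides UniformParameter, as with the other parameter classes):

Examples

>>> from rising.random import UniformParameter
>>> from rising.transforms.intensity import GammaCorrection
>>> fixed = GammaCorrection(gamma=2.0)
>>> sampled = GammaCorrection(gamma=UniformParameter(0.7, 1.5))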

class rising.transforms.intensity.RandomValuePerChannel(augment_fn, random_sampler, per_channel=False, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.PerChannelTransform

Apply augmentations which take a random value as input, passed via the keyword argument value

Warning

This transform will apply different values to different keys.

Parameters
  • augment_fn (callable) – augmentation function

  • random_sampler (AbstractParameter) – sampler which produces the random value passed to augment_fn

  • per_channel (bool) – enable transformation per channel

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to augment_fn

forward(**data)[source][source]

Perform Augmentation.

Parameters

data – dict with data

Returns

augmented data

Return type

dict

class rising.transforms.intensity.RandomAddValue(random_sampler, per_channel=False, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.intensity.RandomValuePerChannel

Increase values additively

Warning

This transform will apply different values to different keys.

Parameters
  • random_sampler (AbstractParameter) – specify values to add

  • per_channel (bool) – enable transformation per channel

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to augment_fn

class rising.transforms.intensity.RandomScaleValue(random_sampler, per_channel=False, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.intensity.RandomValuePerChannel

Scale values multiplicatively

Warning

This transform will apply different values to different keys.

Parameters
  • random_sampler (AbstractParameter) – specify values to scale by

  • per_channel (bool) – enable transformation per channel

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to augment_fn
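
A sketch of the two concrete subclasses with a uniform sampler (the import of UniformParameter from rising.random and the ranges are assumptions):

Examples

>>> from rising.random import UniformParameter
>>> from rising.transforms.intensity import RandomAddValue, RandomScaleValue
>>> add = RandomAddValue(UniformParameter(-0.2, 0.2))
>>> scale = RandomScaleValue(UniformParameter(0.9, 1.1), per_channel=True)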

Kernel Transforms

class rising.transforms.kernel.KernelTransform(in_channels, kernel_size, dim=2, stride=1, padding=0, padding_mode='zero', keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.AbstractTransform

Base class for kernel-based transformations (the kernel is applied to each channel individually)

Parameters
  • in_channels (int) – number of input channels

  • kernel_size (Union[int, Sequence]) – size of kernel

  • dim (int) – number of spatial dimensions

  • stride (Union[int, Sequence]) – stride of convolution

  • padding (Union[int, Sequence]) – padding size for input

  • padding_mode (str) – padding mode for input. Supports all modes from torch.nn.functional.pad() except circular

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • kwargs – keyword arguments passed to superclass

See also

torch.nn.functional.pad()

create_kernel()[source][source]

Create kernel for convolution

Return type

Tensor

forward(**data)[source][source]

Apply kernel to selected keys

Parameters

data – input data

Returns

dict with transformed data

Return type

dict

static get_conv(dim)[source][source]

Select convolution with regard to dimension

Parameters

dim – spatial dimension of data

Returns

the suitable convolutional function

Return type

Callable

class rising.transforms.kernel.GaussianSmoothing(in_channels, kernel_size, std, dim=2, stride=1, padding=0, padding_mode='reflect', keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.kernel.KernelTransform

Perform Gaussian smoothing. Filtering is performed separately for each channel in the input using a depthwise convolution. This code is adapted from: https://discuss.pytorch.org/t/is-there-anyway-to-do-gaussian-filtering-for-an-image-2d-3d-in-pytorch/12351/10

Parameters
  • in_channels (int) – number of input channels

  • kernel_size (Union[int, Sequence]) – size of kernel

  • std (Union[int, Sequence]) – standard deviation of gaussian

  • dim (int) – number of spatial dimensions

  • stride (Union[int, Sequence]) – stride of convolution

  • padding (Union[int, Sequence]) – padding size for input

  • padding_mode (str) – padding mode for input. Supports all modes from torch.nn.functional.pad() except circular

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to superclass

See also

torch.nn.functional.pad()

create_kernel()[source][source]

Create gaussian blur kernel

Return type

Tensor
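
A minimal sketch of blurring a 2D batch (channel count, kernel size, std and padding are illustrative assumptions):

Examples

>>> import torch
>>> from rising.transforms.kernel import GaussianSmoothing
>>> smooth = GaussianSmoothing(in_channels=1, kernel_size=3, std=1.0, dim=2, padding=1)
>>> out = smooth(data=torch.rand(2, 1, 32, 32))  # padding=1 keeps the spatial size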

Spatial Transforms

class rising.transforms.spatial.Mirror(dims, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.BaseTransform

Random mirror transform

Parameters
  • dims (Union[int, DiscreteParameter, Sequence[Union[int, DiscreteParameter]]]) – axes which should be mirrored

  • keys (Sequence[str]) – keys which should be mirrored

  • prob – probability for mirror. If float value is provided, it is used for all dims

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to superclass

Examples

>>> # Use mirror transform for augmentations
>>> from rising.random import DiscreteCombinationsParameter
>>> # We sample from all possible mirror combination for
>>> # volumetric data
>>> trafo = Mirror(DiscreteCombinationsParameter((0, 1, 2)))

class rising.transforms.spatial.Rot90(dims, keys=('data', ), num_rots=(0, 1, 2, 3), prob=0.5, grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.AbstractTransform

Rotate by 90 degrees around the given dims

Parameters
  • dims (Union[Sequence[int], DiscreteParameter]) – dims/axes to rotate. If more than two dims are provided, 2 dimensions are randomly chosen at each call

  • keys (Sequence[str]) – keys which should be rotated

  • num_rots (Sequence[int]) – possible values for number of rotations

  • prob (float) – probability for rotation

  • grad (bool) – enable gradient computation inside transformation

  • kwargs – keyword arguments passed to superclass

See also

torch.Tensor.rot90()
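
A usage sketch (assuming dims index the spatial axes, as in the Mirror example above):

Examples

>>> from rising.transforms.spatial import Rot90
>>> # rotate in the plane of the first two spatial axes by 0-3 quarter turns
>>> trafo = Rot90(dims=(0, 1))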

forward(**data)[source][source]

Apply transformation

Parameters

data – dict with tensors

Returns

dict with augmented data

Return type

dict

class rising.transforms.spatial.ResizeNative(size, mode='nearest', align_corners=None, preserve_range=False, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.BaseTransform

Resize data to given size

Parameters
  • size (Union[int, Sequence[int]]) – spatial output size (excluding batch size and number of channels)

  • mode (str) – one of nearest, linear, bilinear, bicubic, trilinear, area (for more information see torch.nn.functional.interpolate())

  • align_corners (Optional[bool]) – input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels.

  • preserve_range (bool) – output tensor has same range as input tensor

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to augment_fn

class rising.transforms.spatial.Zoom(scale_factor=(0.75, 1.25), mode='nearest', align_corners=None, preserve_range=False, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.BaseTransform

Apply augment_fn to keys. By default the scaling factor is sampled from a uniform distribution with the range specified by scale_factor

Parameters
  • scale_factor (Union[Sequence, AbstractParameter]) – range from which the scaling factor is sampled. If a Sequence[Sequence] is provided, a random value is generated for each item of the outer sequence; this can be used to set different ranges for different axes.

  • mode (str) – one of nearest, linear, bilinear, bicubic, trilinear, area (for more information see torch.nn.functional.interpolate())

  • align_corners (Optional[bool]) – input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels.

  • preserve_range (bool) – output tensor has same range as input tensor

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to augment_fn
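
A sketch contrasting the fixed resize with the random zoom (shape, size and range are illustrative assumptions):

Examples

>>> import torch
>>> from rising.transforms.spatial import ResizeNative, Zoom
>>> batch = {"data": torch.rand(2, 1, 64, 64)}
>>> resized = ResizeNative(size=32)(**batch)           # -> [2, 1, 32, 32]
>>> zoomed = Zoom(scale_factor=(0.75, 1.25))(**batch)  # random uniform scaling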

class rising.transforms.spatial.ProgressiveResize(scheduler, mode='nearest', align_corners=None, preserve_range=False, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.spatial.ResizeNative

Resize data to sizes specified by scheduler

Parameters
  • scheduler (Callable[[int], Union[int, Sequence[int]]]) – scheduler which determines the current size. The scheduler is called with the current iteration of the transform

  • mode (str) – one of nearest, linear, bilinear, bicubic, trilinear, area (for more information see torch.nn.functional.interpolate())

  • align_corners (Optional[bool]) – input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels.

  • preserve_range (bool) – output tensor has same range as input tensor

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to augment_fn

Warning

When this transformation is used in combination with multiprocessing, the step counter is not perfectly synchronized between multiple processes. As a result, the step count may jump between values in a range of the number of processes used.

forward(**data)[source][source]

Resize data

Parameters

**data – input batch

Returns

augmented batch

Return type

dict

increment()[source][source]

Increment step by 1

Returns

returns self to allow chaining

Return type

ResizeNative

reset_step()[source][source]

Reset step to 0

Returns

returns self to allow chaining

Return type

ResizeNative

property step[source]

Current step

Returns

number of steps

Return type

int

class rising.transforms.spatial.SizeStepScheduler(milestones, sizes)[source][source]

Bases: object

Scheduler which returns the size once a milestone is reached

Parameters
  • milestones (Sequence[int]) – steps at which the size should be changed

  • sizes (Union[Sequence[int], Sequence[Sequence[int]]]) – sizes corresponding to the milestones

__call__(step)[source][source]

Return size with regard to milestones

Parameters

step – current step

Returns

current size

Return type

Union[int, Sequence[int], Sequence[Sequence[int]]]
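
A sketch wiring a scheduler into the transform; per the documented signature any callable mapping the step counter to a size works (the sizes and threshold here are illustrative):

Examples

>>> from rising.transforms.spatial import ProgressiveResize
>>> trafo = ProgressiveResize(lambda step: 16 if step < 10 else 32)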

Tensor Transforms

class rising.transforms.tensor.ToTensor(keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.BaseTransform

Transform Input Collection to Collection of torch.Tensor

Parameters
  • keys (Sequence) – keys which should be transformed

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to augment_fn

class rising.transforms.tensor.ToDeviceDtype(device=None, dtype=None, non_blocking=False, copy=False, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.BaseTransform

Push data to device and convert to dtype

Parameters
  • device (Union[device, str, None]) – target device

  • dtype (Optional[dtype]) – target dtype

  • non_blocking (bool) – if True and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.

  • copy (bool) – create copy of data

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to function

class rising.transforms.tensor.ToDevice(device, non_blocking=False, copy=False, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.tensor.ToDeviceDtype

Push data to device

Parameters
  • device (Union[device, str, None]) – target device

  • non_blocking (bool) – if True and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.

  • copy (bool) – create copy of data

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to function

class rising.transforms.tensor.ToDtype(dtype, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.tensor.ToDeviceDtype

Convert data to dtype

Parameters
  • dtype (dtype) – target dtype

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • kwargs – keyword arguments passed to function
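
A sketch of the three conversion transforms (the device string and dtypes are illustrative):

Examples

>>> import torch
>>> from rising.transforms.tensor import ToDeviceDtype, ToDevice, ToDtype
>>> both = ToDeviceDtype(device="cuda:0", dtype=torch.float32, non_blocking=True)
>>> dev_only = ToDevice("cuda:0", non_blocking=True)
>>> dtype_only = ToDtype(torch.float16)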

class rising.transforms.tensor.TensorOp(op_name, *args, keys=('data', ), grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.BaseTransform

Apply functions which are supported by the torch.Tensor class

Parameters
  • op_name (str) – name of tensor operation

  • *args – positional arguments passed to function

  • keys (Sequence) – keys which should be augmented

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to function
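
A sketch applying a torch.Tensor method by name; this is the long conversion mentioned in the OneHot warning above:

Examples

>>> from rising.transforms.tensor import TensorOp
>>> to_long = TensorOp("long", keys=("seg",))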

class rising.transforms.tensor.Permute(dims, grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.BaseTransform

Permute dimensions of tensor

Parameters
  • dims (Dict[str, Sequence[int]]) – defines permutation sequence for respective key

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to permute function

forward(**data)[source][source]

Forward input

Parameters

data – batch dict

Returns

augmented data

Return type

dict
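
A sketch permuting dimensions per key (the channels-last to channels-first permutation shown is illustrative):

Examples

>>> from rising.transforms.tensor import Permute
>>> trafo = Permute({"data": (0, 3, 1, 2)})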

Utility Transforms

class rising.transforms.utility.DoNothing(grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.AbstractTransform

Transform that returns the input as is

Parameters
  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – keyword arguments passed to superclass

forward(**data)[source][source]

Forward input

Parameters

data – input dict

Return type

dict

Returns

input dict

class rising.transforms.utility.SegToBox(keys, grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.AbstractTransform

Convert instance segmentation to bounding boxes

Parameters
  • keys (Mapping[Hashable, Hashable]) – the key specifies which item to use as segmentation and the item specifies where to save the bounding boxes

  • grad (bool) – enable gradient computation inside transformation

forward(**data)[source][source]
Parameters

**data – input data

Returns

transformed data

Return type

dict

class rising.transforms.utility.BoxToSeg(keys, shape, dtype, device, grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.AbstractTransform

Convert bounding boxes to instance segmentation

Parameters
  • keys (Mapping[Hashable, Hashable]) – the key specifies which item to use as the bounding boxes and the item specifies where to save the segmentation

  • shape (Sequence[int]) – spatial shape of the output tensor (the batch size is derived from the bounding boxes; the output has one channel)

  • dtype (dtype) – dtype of segmentation

  • device (Union[device, str]) – device of segmentation

  • grad (bool) – enable gradient computation inside transformation

  • **kwargs – Additional keyword arguments forwarded to the Base Class

forward(**data)[source][source]

Forward input

Parameters

**data – input data

Returns

transformed data

Return type

dict
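
A sketch of a segmentation-to-boxes-and-back round trip (key names, shape, dtype and device are illustrative assumptions):

Examples

>>> import torch
>>> from rising.transforms.utility import SegToBox, BoxToSeg
>>> to_box = SegToBox({"seg": "boxes"})
>>> to_seg = BoxToSeg({"boxes": "seg"}, shape=(32, 32), dtype=torch.long, device="cpu")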

class rising.transforms.utility.InstanceToSemantic(keys, cls_key, grad=False, **kwargs)[source][source]

Bases: rising.transforms.abstract.AbstractTransform

Convert an instance segmentation to a semantic segmentation

Parameters
  • keys (Mapping[str, str]) – the key specifies which item to use as instance segmentation and the item specifies where to save the semantic segmentation

  • cls_key (Hashable) – key where the class mapping is saved. The mapping needs to be a Sequence[Sequence[int]].

  • grad (bool) – enable gradient computation inside transformation

forward(**data)[source][source]

Forward input

Parameters

**data – input data

Returns

transformed data

Return type

dict

© Copyright 2019-2020, Justus Schock, Michael Baumgartner. Revision ca0cf77f.
