rising.transforms¶
Provides the augmentations and transforms used by the rising.loading.DataLoader.

Implementations include:
- Transformation Base Classes
- Composed Transforms
- Affine Transforms
- Channel Transforms
- Cropping Transforms
- Device Transforms
- Format Transforms
- Intensity Transforms
- Kernel Transforms
- Spatial Transforms
- Tensor Transforms
- Utility Transforms
Transformation Base Classes¶
- class rising.transforms.abstract.AbstractTransform(grad=False, **kwargs)[source]
  Bases: torch.nn.Module
  Base class for all transforms.
  - Parameters
    - grad (bool) – enable gradient computation inside the transformation
  - __call__(*args, **kwargs)[source]
    Calls the superclass within the correct torch context.
    - Parameters
      - *args – forwarded positional arguments
      - **kwargs – forwarded keyword arguments
    - Returns
      transformed data
    - Return type
      Any
  - forward(**data)[source]
    Implement the transform functionality here.
    - Parameters
      - **data – dict with data
    - Returns
      dict with transformed data
    - Return type
      dict
  - register_sampler(name, sampler, *args, **kwargs)[source]
    Registers a parameter sampler to the transform. Internally, a property is created that forwards attribute access to calls of the sampler.
    - Parameters
      - name (str) – the property name
      - sampler (Union[Sequence, AbstractParameter]) – the sampler; if not already a sampler, it will be wrapped into one that always returns the same element
      - *args – additional positional arguments (forwarded to the sampler call)
      - **kwargs – additional keyword arguments (forwarded to the sampler call)
- class rising.transforms.abstract.BaseTransform(augment_fn, *args, keys=('data',), grad=False, property_names=(), **kwargs)[source]
  Bases: rising.transforms.abstract.AbstractTransform
  Transform that applies a functional interface to the given keys.
  - Parameters
    - augment_fn (Callable[[Tensor], Any]) – function used for augmentation
    - *args – positional arguments passed to augment_fn
    - keys (Sequence) – keys which should be augmented
    - grad (bool) – enable gradient computation inside the transformation
    - property_names (Sequence[str]) – a tuple containing all the properties to call during the forward pass
    - **kwargs – keyword arguments passed to augment_fn
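The core idea is that augment_fn is applied to every batch entry named in keys, while other entries pass through unchanged. A stdlib-only sketch (illustrative, not rising's actual implementation, which operates on torch tensors):

```python
class BaseTransform:
    """Apply augment_fn to the dict entries named in `keys`."""

    def __init__(self, augment_fn, *args, keys=("data",), **kwargs):
        self.augment_fn = augment_fn
        self.keys = keys
        self.args = args
        self.kwargs = kwargs

    def __call__(self, **data):
        for key in self.keys:
            # extra positional/keyword arguments are forwarded to augment_fn
            data[key] = self.augment_fn(data[key], *self.args, **self.kwargs)
        return data


# double every value under 'data'; 'label' is left untouched
double = BaseTransform(lambda x: [v * 2 for v in x], keys=("data",))
print(double(data=[1, 2, 3], label=[0, 1, 0]))  # {'data': [2, 4, 6], 'label': [0, 1, 0]}
```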
- class rising.transforms.abstract.PerSampleTransform(augment_fn, *args, keys=('data',), grad=False, property_names=(), **kwargs)[source]
  Bases: rising.transforms.abstract.BaseTransform
  Applies the transformation to each sample in the batch individually. augment_fn must be callable with an out argument, in which the results are saved.
  - Parameters
    - augment_fn (Callable[[Tensor], Any]) – function used for augmentation
    - *args – positional arguments passed to augment_fn
    - keys (Sequence) – keys which should be augmented
    - grad (bool) – enable gradient computation inside the transformation
    - property_names (Sequence[str]) – a tuple containing all the properties to call during the forward pass
    - **kwargs – keyword arguments passed to augment_fn
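The difference from BaseTransform is the unit of work: instead of handing augment_fn the whole batch at once, each sample along the batch dimension is augmented on its own, so randomly sampled parameters can differ between samples. A stdlib-only sketch (illustrative; rising operates on batched torch tensors):

```python
import random


def per_sample_apply(batch, augment_fn):
    """Apply augment_fn to each sample of the batch individually."""
    return [augment_fn(sample) for sample in batch]


random.seed(0)
# each sample gets its own independently sampled jitter
jitter = lambda s: s + random.uniform(-0.1, 0.1)
print(per_sample_apply([1.0, 2.0, 3.0], jitter))
```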
- class rising.transforms.abstract.PerChannelTransform(augment_fn, per_channel=False, keys=('data',), grad=False, property_names=(), **kwargs)[source]
  Bases: rising.transforms.abstract.BaseTransform
  Applies the transformation per channel (but still to the whole batch).
  - Parameters
    - augment_fn (Callable[[Tensor], Any]) – function used for augmentation
    - per_channel (bool) – whether to apply the augmentation per channel
    - keys (Sequence) – keys which should be augmented
    - grad (bool) – enable gradient computation inside the transformation
    - property_names (Sequence[str]) – a tuple containing all the properties to call during the forward pass
    - **kwargs – keyword arguments passed to augment_fn
Compose Transforms¶
- class rising.transforms.compose.Compose(*transforms, shuffle=False, transform_call=<function dict_call>)[source]
  Bases: rising.transforms.abstract.AbstractTransform
  Composes multiple transforms into one.
  - Parameters
    - transforms (Union[AbstractTransform, Sequence[AbstractTransform]]) – one or multiple transformations, applied in consecutive order
    - shuffle (bool) – apply the transforms in random order
    - transform_call (Callable[[Any, Callable], Any]) – function which determines how the transforms are called. By default, Mappings and Sequences are unpacked during the transform.
  - forward(*seq_like, **map_like)[source]
    Applies the transforms in consecutive order. Can handle either Sequence-like or Mapping-like data.
    - Parameters
      - *seq_like – data which is unpacked like a Sequence
      - **map_like – data which is unpacked like a dict
    - Returns
      transformed data
    - Return type
      Union[Sequence, Mapping]
  - property shuffle[source]
    Getter for the shuffle attribute.
    - Returns
      True if shuffle is enabled, False otherwise
    - Return type
      bool
  - property transforms[source]
    Getter for the composed transforms.
    - Returns
      the transforms to compose
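The composition semantics above can be sketched with plain Python: each transform receives the output of the previous one, and shuffle randomizes the execution order per call (illustrative stdlib sketch, not rising's implementation):

```python
import random


class Compose:
    """Run the given transforms in order on a dict batch."""

    def __init__(self, *transforms, shuffle=False):
        self.transforms = transforms
        self.shuffle = shuffle

    def __call__(self, **data):
        order = list(self.transforms)
        if self.shuffle:
            # apply the transforms in a fresh random order on every call
            random.shuffle(order)
        for t in order:
            data = t(**data)
        return data


add_one = lambda **d: {**d, "data": [x + 1 for x in d["data"]]}
times_two = lambda **d: {**d, "data": [x * 2 for x in d["data"]]}

pipeline = Compose(add_one, times_two)
print(pipeline(data=[1, 2]))  # {'data': [4, 6]}
```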
- class rising.transforms.compose.DropoutCompose(*transforms, dropout=0.5, shuffle=False, random_sampler=None, transform_call=<function dict_call>, **kwargs)[source]
  Bases: rising.transforms.compose.Compose
  Composes multiple transforms into one and applies them randomly.
  - Parameters
    - *transforms – one or multiple transformations, applied in consecutive order
    - dropout (Union[float, Sequence[float]]) – if provided as a float, each transform is skipped with the given probability; if dropout is a sequence, it must specify the dropout probability for each given transform
    - shuffle (bool) – apply the transforms in random order
    - random_sampler (Optional[ContinuousParameter]) – a continuous parameter sampler; samples a random value for each of the transforms
    - transform_call (Callable[[Any, Callable], Any]) – function which determines how the transforms are called. By default, Mappings and Sequences are unpacked during the transform.
  - Raises
    ValueError – if dropout is a sequence whose length does not match the number of transforms
  - forward(*seq_like, **map_like)[source]
    Applies the transforms in consecutive order. Can handle either Sequence-like or Mapping-like data.
    - Parameters
      - *seq_like – data which is unpacked like a Sequence
      - **map_like – data which is unpacked like a dict
    - Returns
      dict with transformed data
    - Return type
      Union[Sequence, Mapping]
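The dropout semantics (skip each transform with its probability, broadcast a scalar to all transforms, and reject mismatched sequences) can be sketched in stdlib Python as follows (illustrative only):

```python
import random


def dropout_compose(transforms, dropout, **data):
    """Run each transform unless it is dropped with its probability."""
    # a scalar dropout is broadcast to one probability per transform
    probs = [dropout] * len(transforms) if isinstance(dropout, float) else list(dropout)
    if len(probs) != len(transforms):
        raise ValueError("dropout sequence must match the number of transforms")
    for t, p in zip(transforms, probs):
        if random.random() >= p:   # skip the transform with probability p
            data = t(**data)
    return data


add_one = lambda **d: {**d, "data": d["data"] + 1}

# dropout=0.0 never skips, dropout=1.0 always skips
print(dropout_compose([add_one, add_one], 0.0, data=0))  # {'data': 2}
print(dropout_compose([add_one, add_one], 1.0, data=0))  # {'data': 0}
```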
- class rising.transforms.compose.OneOf(*transforms, weights=None, p=1.0, transform_call=<function dict_call>)[source]
  Bases: rising.transforms.abstract.AbstractTransform
  Applies one of the given transforms.
  - Parameters
    - *transforms – transforms to choose from
    - weights (Optional[Sequence[float]]) – additional weights for the transforms
    - p (float) – probability that one of the transforms is applied
    - transform_call (Callable[[Any, Callable], Any]) – function which determines how the transforms are called. By default, Mappings and Sequences are unpacked during the transform.
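A stdlib sketch of the OneOf behavior: with probability p, a single transform is drawn (optionally weighted) and applied; otherwise the data passes through unchanged (illustrative, not rising's implementation):

```python
import random


def one_of(transforms, weights=None, p=1.0, **data):
    """With probability p, apply exactly one randomly chosen transform."""
    if random.random() < p:
        chosen = random.choices(transforms, weights=weights, k=1)[0]
        data = chosen(**data)
    return data


flip = lambda **d: {**d, "data": d["data"][::-1]}
neg = lambda **d: {**d, "data": [-x for x in d["data"]]}

out = one_of([flip, neg], p=1.0, data=[1, 2, 3])
print(out["data"] in ([3, 2, 1], [-1, -2, -3]))  # True
```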
Affine Transforms¶
- class rising.transforms.affine.Affine(matrix=None, keys=('data',), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, per_sample=True, **kwargs)[source]
  Bases: rising.transforms.abstract.BaseTransform
  Performs an affine transformation on a given sample dict. The transformation is applied to all the dict entries specified in keys.
  - Parameters
    - matrix (Union[Tensor, Sequence[Sequence[float]], None]) – if given, overwrites the parameters for scale, rotation and translation. Should be a matrix of shape [(BATCHSIZE,) NDIM, NDIM(+1)]; this matrix represents the whole transformation.
    - keys (Sequence) – keys which should be augmented
    - grad (bool) – enable gradient computation inside the transformation
    - output_size (Optional[tuple]) – if given, this will be the resulting image size; defaults to None
    - adjust_size (bool) – if True, the resulting image size is calculated dynamically to ensure that the whole image fits
    - interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
    - padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
    - align_corners (bool) – geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels; if set to False, they refer to the corner points of the corner pixels, making the sampling more resolution agnostic.
    - reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation parameter order [W, H(, D)] and batch order [(D, )H, W]
    - per_sample (bool) – sample different values for each element in the batch; the transform is still applied in a batched fashion
    - **kwargs – additional keyword arguments passed to the affine transform
  - assemble_matrix(**data)[source]
    Assembles the matrix (and takes care of batching and having it on the right device and in the correct dtype and dimensionality).
    - Parameters
      - **data – the data to be transformed; used to determine batchsize, dimensionality, dtype and device
    - Returns
      the (batched) transformation matrix
    - Return type
      Tensor
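The matrix shape [(BATCH,) NDIM, NDIM(+1)] means an NDIM x NDIM linear part, optionally extended by one translation column (homogeneous coordinates). A stdlib sketch of how such a matrix acts on a point (illustrative only; rising applies the matrix via torch grid sampling, not pointwise like this):

```python
def apply_affine(matrix, point):
    """Apply an [NDIM, NDIM(+1)] affine matrix to a single point."""
    ndim = len(matrix)
    out = []
    for row in matrix:
        val = sum(row[j] * point[j] for j in range(ndim))
        if len(row) == ndim + 1:   # extra column holds the translation
            val += row[ndim]
        out.append(val)
    return out


# scale x by 2, translate y by +1
m = [[2.0, 0.0, 0.0],
     [0.0, 1.0, 1.0]]
print(apply_affine(m, [3.0, 4.0]))  # [6.0, 5.0]
```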
- class rising.transforms.affine.BaseAffine(scale=None, rotation=None, translation=None, degree=False, image_transform=True, keys=('data',), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, per_sample=True, **kwargs)[source]
  Bases: rising.transforms.affine.Affine
  Performs a basic affine transformation on a given sample dict. The transformation is applied to all the dict entries specified in keys.
  - Parameters
    - scale (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter], None]) – the scale factor(s). Supported are: a single parameter (float or int), which is replicated for all dimensions and batch samples; a parameter per dimension, which is replicated for all batch samples; None, which is treated as a scaling factor of 1
    - rotation (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter], None]) – the rotation factor(s). The rotation is performed in consecutive order: axis 0 -> axis 1 (-> axis 2). Supported are: a single parameter (float or int), which is replicated for all dimensions and batch samples; a parameter per dimension, which is replicated for all batch samples; None, which is treated as a rotation angle of 0
    - translation (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter], None]) – the translation offset(s) relative to the image (should be in the range [0, 1]). Supported are: a single parameter (float or int), which is replicated for all dimensions and batch samples; a parameter per dimension, which is replicated for all batch samples; None, which is treated as a translation offset of 0
    - keys (Sequence) – keys which should be augmented
    - grad (bool) – enable gradient computation inside the transformation
    - degree (bool) – whether the given rotation(s) are in degrees; only valid for rotation parameters which aren't passed as a full transformation matrix
    - output_size (Optional[tuple]) – if given, this will be the resulting image size; defaults to None
    - adjust_size (bool) – if True, the resulting image size is calculated dynamically to ensure that the whole image fits
    - interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
    - padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
    - align_corners (bool) – geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels; if set to False, they refer to the corner points of the corner pixels, making the sampling more resolution agnostic.
    - reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation parameter order [W, H(, D)] and batch order [(D, )H, W]
    - per_sample (bool) – sample different values for each element in the batch; the transform is still applied in a batched fashion
    - **kwargs – additional keyword arguments passed to the affine transform
  - assemble_matrix(**data)[source]
    Assembles the matrix (and takes care of batching and having it on the right device and in the correct dtype and dimensionality).
    - Parameters
      - **data – the data to be transformed; used to determine batchsize, dimensionality, dtype and device
    - Returns
      the (batched) transformation matrix
    - Return type
      Tensor
- class rising.transforms.affine.StackedAffine(*transforms, keys=('data',), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, **kwargs)[source]
  Bases: rising.transforms.affine.Affine
  Stacks multiple affines with dynamic ensembling by matrix multiplication to avoid multiple interpolations.
  - Parameters
    - transforms (Union[Affine, Sequence[Union[Sequence[Affine], Affine]]]) – the transforms to stack. Each transform must have a function called assemble_matrix, which is called to dynamically assemble stacked matrices. These transformations are then stacked by matrix multiplication so that only a single interpolation is performed.
    - keys (Sequence) – keys which should be augmented
    - grad (bool) – enable gradient computation inside the transformation
    - output_size (Optional[tuple]) – if given, this will be the resulting image size; defaults to None
    - adjust_size (bool) – if True, the resulting image size is calculated dynamically to ensure that the whole image fits
    - interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
    - padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
    - align_corners (bool) – geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels; if set to False, they refer to the corner points of the corner pixels, making the sampling more resolution agnostic.
    - reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation parameter order [W, H(, D)] and batch order [(D, )H, W]
    - **kwargs – additional keyword arguments passed to the affine transform
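The motivation for stacking is that composing affine maps as matrices first means the image is resampled (interpolated) only once, instead of once per transform. A stdlib sketch with 2x2 linear parts only (illustrative):

```python
def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]


scale = [[2.0, 0.0], [0.0, 2.0]]   # uniform scale by 2
swap = [[0.0, 1.0], [1.0, 0.0]]    # swap the two axes

# one combined matrix -> one interpolation pass over the image
stacked = matmul(swap, scale)
print(stacked)  # [[0.0, 2.0], [2.0, 0.0]]
```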
- class rising.transforms.affine.Rotate(rotation, keys=('data',), grad=False, degree=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, **kwargs)[source]
  Bases: rising.transforms.affine.BaseAffine
  Performs a rotation-only affine transformation on a given sample dict. The rotation is applied in consecutive order: rot axis 0 -> rot axis 1 -> rot axis 2. The transformation is applied to all the dict entries specified in keys.
  - Parameters
    - rotation (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter]]) – the rotation factor(s). The rotation is performed in consecutive order: axis 0 -> axis 1 (-> axis 2). Supported are: a single parameter (float or int), which is replicated for all dimensions and batch samples; a parameter per dimension, which is replicated for all batch samples; None, which is treated as a rotation angle of 0
    - keys (Sequence) – keys which should be augmented
    - grad (bool) – enable gradient computation inside the transformation
    - degree (bool) – whether the given rotation(s) are in degrees; only valid for rotation parameters which aren't passed as a full transformation matrix
    - output_size (Optional[tuple]) – if given, this will be the resulting image size; defaults to None
    - adjust_size (bool) – if True, the resulting image size is calculated dynamically to ensure that the whole image fits
    - interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
    - padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
    - align_corners (bool) – geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels; if set to False, they refer to the corner points of the corner pixels, making the sampling more resolution agnostic.
    - reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation parameter order [W, H(, D)] and batch order [(D, )H, W]
    - **kwargs – additional keyword arguments passed to the affine transform
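What the degree flag implies can be shown with a small sketch: angles are converted to radians before the rotation matrix is assembled (2D example with stdlib math; the function name is illustrative, not rising's API):

```python
import math


def rotation_matrix_2d(angle, degree=False):
    """Build a 2D rotation matrix; convert from degrees when degree=True."""
    if degree:
        angle = math.radians(angle)
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s],
            [s, c]]


m = rotation_matrix_2d(90, degree=True)
print(round(m[0][1], 6), round(m[1][0], 6))  # -1.0 1.0
```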
- class rising.transforms.affine.Scale(scale, keys=('data',), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, **kwargs)[source]
  Bases: rising.transforms.affine.BaseAffine
  Performs a scale-only affine transformation on a given sample dict. The transformation is applied to all the dict entries specified in keys.
  - Parameters
    - scale (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter]]) – the scale factor(s). Supported are: a single parameter (float or int), which is replicated for all dimensions and batch samples; a parameter per dimension, which is replicated for all batch samples; None, which is treated as a scaling factor of 1
    - keys (Sequence) – keys which should be augmented
    - grad (bool) – enable gradient computation inside the transformation
    - degree – whether the given rotation(s) are in degrees; only valid for rotation parameters which aren't passed as a full transformation matrix
    - output_size (Optional[tuple]) – if given, this will be the resulting image size; defaults to None
    - adjust_size (bool) – if True, the resulting image size is calculated dynamically to ensure that the whole image fits
    - interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
    - padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
    - align_corners (bool) – geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels; if set to False, they refer to the corner points of the corner pixels, making the sampling more resolution agnostic.
    - reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation parameter order [W, H(, D)] and batch order [(D, )H, W]
    - **kwargs – additional keyword arguments passed to the affine transform
- class rising.transforms.affine.Translate(translation, keys=('data',), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, unit='pixel', reverse_order=False, **kwargs)[source]
  Bases: rising.transforms.affine.BaseAffine
  Performs a translation-only affine transformation on a given sample dict. The transformation is applied to all the dict entries specified in keys.
  - Parameters
    - translation (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter]]) – the translation offset(s) relative to the image (should be in the range [0, 1]). Supported are: a single parameter (float or int), which is replicated for all dimensions and batch samples; a parameter per dimension, which is replicated for all batch samples; None, which is treated as a translation offset of 0
    - keys (Sequence) – keys which should be augmented
    - grad (bool) – enable gradient computation inside the transformation
    - output_size (Optional[tuple]) – if given, this will be the resulting image size; defaults to None
    - adjust_size (bool) – if True, the resulting image size is calculated dynamically to ensure that the whole image fits
    - interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
    - padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
    - align_corners (bool) – geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels; if set to False, they refer to the corner points of the corner pixels, making the sampling more resolution agnostic.
    - unit (str) – defines the unit of the translation: either 'relative' to the image size or in 'pixel'
    - reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation parameter order [W, H(, D)] and batch order [(D, )H, W]
    - **kwargs – additional keyword arguments passed to the affine transform
  - assemble_matrix(**data)[source]
    Assembles the matrix (and takes care of batching and having it on the right device and in the correct dtype and dimensionality).
    - Parameters
      - **data – the data to be transformed; used to determine batchsize, dimensionality, dtype and device
    - Returns
      the (batched) transformation matrix [N, NDIM, NDIM]
    - Return type
      Tensor
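The unit parameter distinguishes two conventions for the same offset; a 'pixel' offset corresponds to a 'relative' one divided by the image size along each axis (a hedged sketch of the assumed conversion, derived from the documented parameter semantics):

```python
def to_relative(offset_px, image_size):
    """Convert a per-axis pixel offset into a relative (fraction) offset."""
    return [o / s for o, s in zip(offset_px, image_size)]


# 16 px of a 128-px axis and 32 px of a 256-px axis are both 12.5%
print(to_relative([16, 32], [128, 256]))  # [0.125, 0.125]
```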
- class rising.transforms.affine.Resize(size, keys=('data',), grad=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, **kwargs)[source]
  Bases: rising.transforms.affine.Scale
  Performs a resizing affine transformation on a given sample dict. The transformation is applied to all the dict entries specified in keys.
  - Parameters
    - size (Union[int, Tuple[int]]) – the target size; if an int is given, it is repeated for all dimensions
    - keys (Sequence) – keys which should be augmented
    - grad (bool) – enable gradient computation inside the transformation
    - interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
    - padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
    - align_corners (bool) – geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels; if set to False, they refer to the corner points of the corner pixels, making the sampling more resolution agnostic.
    - reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation parameter order [W, H(, D)] and batch order [(D, )H, W]
    - **kwargs – additional keyword arguments passed to the affine transform
  Notes
  The offsets for shifting back and to the origin are calculated on the entry matching the first item in keys for each batch.
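Since Resize derives from Scale, a target size presumably maps to a per-axis scale factor of target/input; the sketch below illustrates that assumed relationship only (the helper is hypothetical, not part of rising's API):

```python
def resize_scale(input_size, target_size):
    """Assumed per-axis scale factors for resizing input_size -> target_size."""
    if isinstance(target_size, int):
        # an int target is repeated for all dimensions, as documented
        target_size = [target_size] * len(input_size)
    return [t / i for t, i in zip(target_size, input_size)]


print(resize_scale([100, 200], 50))  # [0.5, 0.25]
```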
Affine¶
-
class
rising.transforms.affine.
Affine
(matrix=None, keys=('data', ), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, per_sample=True, **kwargs)[source][source] Bases:
rising.transforms.abstract.BaseTransform
Class Performing an Affine Transformation on a given sample dict. The transformation will be applied to all the dict-entries specified in
keys
.- Parameters
matrix (
Union
[Tensor
,Sequence
[Sequence
[float
]],None
]) – if given, overwrites the parameters forscale
, :attr:rotation` andtranslation
. Should be a matrix of shape [(BATCHSIZE,) NDIM, NDIM(+1)] This matrix represents the whole transformation matrixkeys (
Sequence
) – keys which should be augmentedgrad (
bool
) – enable gradient computation inside transformationoutput_size (
Optional
[tuple
]) – if given, this will be the resulting image size. Defaults toNone
adjust_size (
bool
) – if True, the resulting image size will be calculated dynamically to ensure that the whole image fits.interpolation_mode (
str
) – interpolation mode to calculate output values'bilinear'
|'nearest'
. Default:'bilinear'
padding_mode (
str
) – padding mode for outside grid values'zeros
’ |'border'
|'reflection'
. Default:'zeros'
align_corners (
bool
) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic.reverse_order (
bool
) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]per_sample (
bool
) – sample different values for each element in the batch. The transform is still applied in a batched wise fashion.**kwargs – additional keyword arguments passed to the affine transform
-
assemble_matrix
(**data)[source][source] Assembles the matrix (and takes care of batching and having it on the right device and in the correct dtype and dimensionality).
- Parameters
**data – the data to be transformed. Will be used to determine batchsize, dimensionality, dtype and device
- Returns
the (batched) transformation matrix
- Return type
StackedAffine¶
-
class
rising.transforms.affine.
StackedAffine
(*transforms, keys=('data', ), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, **kwargs)[source][source] Bases:
rising.transforms.affine.Affine
Class to stack multiple affines with dynamic ensembling by matrix multiplication to avoid multiple interpolations.
- Parameters
transforms (Union[Affine, Sequence[Union[Sequence[Affine], Affine]]]) – the transforms to stack. Each transform must have a function called assemble_matrix, which is called to dynamically assemble stacked matrices. Afterwards these transformations are stacked by matrix multiplication to only perform a single interpolation.
keys (Sequence) – keys which should be augmented
grad (bool) – enable gradient computation inside transformation
output_size (Optional[tuple]) – if given, this will be the resulting image size. Defaults to None
adjust_size (bool) – if True, the resulting image size will be calculated dynamically to ensure that the whole image fits.
interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels. If set to False, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic.
reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]
**kwargs – additional keyword arguments passed to the affine transform
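The stacking trick can be sketched without the library: the individual affine matrices are multiplied first, and the data would then be resampled only once with the combined matrix. A minimal, framework-free 2D illustration in pure Python (names and values are illustrative, not rising's implementation):

```python
import math

def matmul3(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, pt):
    """Apply a 3x3 homogeneous matrix to a 2D point."""
    x, y = pt
    v = [x, y, 1.0]
    out = [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]
    return (out[0] / out[2], out[1] / out[2])

theta = math.radians(30)
rot = [[math.cos(theta), -math.sin(theta), 0.0],
       [math.sin(theta),  math.cos(theta), 0.0],
       [0.0, 0.0, 1.0]]
scale = [[1.2, 0.0, 0.0],
         [0.0, 1.2, 0.0],
         [0.0, 0.0, 1.0]]

# One combined matrix replaces two separate resampling passes.
stacked = matmul3(scale, rot)

p = (1.0, 0.0)
via_two_steps = apply(scale, apply(rot, p))
via_stacked = apply(stacked, p)
```

Both paths land on the same point; the benefit of the stacked form is that an image would only be interpolated once.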
BaseAffine¶
-
class
rising.transforms.affine.
BaseAffine
(scale=None, rotation=None, translation=None, degree=False, image_transform=True, keys=('data', ), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, per_sample=True, **kwargs)[source][source] Bases:
rising.transforms.affine.Affine
Class performing a basic Affine Transformation on a given sample dict. The transformation will be applied to all the dict-entries specified in
keys
.- Parameters
scale (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter], None]) – the scale factor(s). Supported are: a single parameter (as float or int), which will be replicated for all dimensions and batch samples; a parameter per dimension, which will be replicated for all batch samples; None, which will be treated as a scaling factor of 1
rotation (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter], None]) – the rotation factor(s). The rotation is performed in consecutive order: axis 0 -> axis 1 (-> axis 2). Supported are: a single parameter (as float or int), which will be replicated for all dimensions and batch samples; a parameter per dimension, which will be replicated for all batch samples; None, which will be treated as a rotation angle of 0
translation (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter], None]) – the translation offset(s) relative to the image (should be in the range [0, 1]). Supported are: a single parameter (as float or int), which will be replicated for all dimensions and batch samples; a parameter per dimension, which will be replicated for all batch samples; None, which will be treated as a translation offset of 0
keys (Sequence) – keys which should be augmented
grad (bool) – enable gradient computation inside transformation
degree (bool) – whether the given rotation(s) are in degrees. Only valid for rotation parameters which aren't passed as a full transformation matrix.
output_size (Optional[tuple]) – if given, this will be the resulting image size. Defaults to None
adjust_size (bool) – if True, the resulting image size will be calculated dynamically to ensure that the whole image fits.
interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels. If set to False, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic.
reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]
per_sample (bool) – sample different values for each element in the batch. The transform is still applied in a batchwise fashion.
**kwargs – additional keyword arguments passed to the affine transform
-
assemble_matrix
(**data)[source][source] Assembles the matrix (and takes care of batching and having it on the right device and in the correct dtype and dimensionality).
- Parameters
**data – the data to be transformed. Will be used to determine batchsize, dimensionality, dtype and device
- Returns
the (batched) transformation matrix
- Return type
Rotate¶
-
class
rising.transforms.affine.
Rotate
(rotation, keys=('data', ), grad=False, degree=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, **kwargs)[source][source] Bases:
rising.transforms.affine.BaseAffine
Class Performing a Rotation-Only Affine Transformation on a given sample dict. The rotation is applied in consecutive order: rot axis 0 -> rot axis 1 -> rot axis 2. The transformation will be applied to all the dict-entries specified in
keys
.- Parameters
rotation (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter]]) – the rotation factor(s). The rotation is performed in consecutive order: axis 0 -> axis 1 (-> axis 2). Supported are: a single parameter (as float or int), which will be replicated for all dimensions and batch samples; a parameter per dimension, which will be replicated for all batch samples; None, which will be treated as a rotation angle of 0
keys (Sequence) – keys which should be augmented
grad (bool) – enable gradient computation inside transformation
degree (bool) – whether the given rotation(s) are in degrees. Only valid for rotation parameters which aren't passed as a full transformation matrix.
output_size (Optional[tuple]) – if given, this will be the resulting image size. Defaults to None
adjust_size (bool) – if True, the resulting image size will be calculated dynamically to ensure that the whole image fits.
interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels. If set to False, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic.
reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]
**kwargs – additional keyword arguments passed to the affine transform
Translate¶
-
class
rising.transforms.affine.
Translate
(translation, keys=('data', ), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, unit='pixel', reverse_order=False, **kwargs)[source][source] Bases:
rising.transforms.affine.BaseAffine
Class Performing a Translation-Only Affine Transformation on a given sample dict. The transformation will be applied to all the dict-entries specified in
keys
.- Parameters
translation (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter]]) – the translation offset(s) relative to the image (should be in the range [0, 1]). Supported are: a single parameter (as float or int), which will be replicated for all dimensions and batch samples; a parameter per dimension, which will be replicated for all batch samples; None, which will be treated as a translation offset of 0
keys (Sequence) – keys which should be augmented
grad (bool) – enable gradient computation inside transformation
output_size (Optional[tuple]) – if given, this will be the resulting image size. Defaults to None
adjust_size (bool) – if True, the resulting image size will be calculated dynamically to ensure that the whole image fits.
interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels. If set to False, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic.
unit (str) – defines the unit of the translation: either 'relative' to the image size or 'pixel'
reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]
**kwargs – additional keyword arguments passed to the affine transform
-
assemble_matrix
(**data)[source][source] Assembles the matrix (and takes care of batching and having it on the right device and in the correct dtype and dimensionality).
- Parameters
**data – the data to be transformed. Will be used to determine batchsize, dimensionality, dtype and device
- Returns
the (batched) transformation matrix [N, NDIM, NDIM]
- Return type
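The `unit` semantics documented above can be illustrated with a tiny helper: a relative offset is scaled by the image size, while a pixel offset is used as-is. This is a hypothetical sketch for clarity, not rising's code:

```python
def to_pixel_offset(offset, image_size, unit="relative"):
    # 'relative' offsets are fractions of the image size; 'pixel' offsets
    # are absolute. Hypothetical helper mirroring the documented `unit`.
    if unit == "relative":
        return offset * image_size
    if unit == "pixel":
        return offset
    raise ValueError(f"unknown unit: {unit!r}")
```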
Scale¶
-
class
rising.transforms.affine.
Scale
(scale, keys=('data', ), grad=False, output_size=None, adjust_size=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, **kwargs)[source][source] Bases:
rising.transforms.affine.BaseAffine
Class Performing a Scale-Only Affine Transformation on a given sample dict. The transformation will be applied to all the dict-entries specified in
keys
.- Parameters
scale (Union[int, Sequence[int], float, Sequence[float], Tensor, AbstractParameter, Sequence[AbstractParameter]]) – the scale factor(s). Supported are: a single parameter (as float or int), which will be replicated for all dimensions and batch samples; a parameter per dimension, which will be replicated for all batch samples; None, which will be treated as a scaling factor of 1
keys (Sequence) – keys which should be augmented
grad (bool) – enable gradient computation inside transformation
output_size (Optional[tuple]) – if given, this will be the resulting image size. Defaults to None
adjust_size (bool) – if True, the resulting image size will be calculated dynamically to ensure that the whole image fits.
interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels. If set to False, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic.
reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]
**kwargs – additional keyword arguments passed to the affine transform
Resize¶
-
class
rising.transforms.affine.
Resize
(size, keys=('data', ), grad=False, interpolation_mode='bilinear', padding_mode='zeros', align_corners=False, reverse_order=False, **kwargs)[source][source] Bases:
rising.transforms.affine.Scale
Class Performing a Resizing Affine Transformation on a given sample dict. The transformation will be applied to all the dict-entries specified in
keys
.- Parameters
size (Union[int, Tuple[int]]) – the target size. If int, this will be repeated for all the dimensions
keys (Sequence) – keys which should be augmented
grad (bool) – enable gradient computation inside transformation
interpolation_mode (str) – interpolation mode to calculate output values: 'bilinear' | 'nearest'. Default: 'bilinear'
padding_mode (str) – padding mode for outside grid values: 'zeros' | 'border' | 'reflection'. Default: 'zeros'
align_corners (bool) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input's corner pixels. If set to False, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic.
reverse_order (bool) – reverses the coordinate order of the transformation to conform to the pytorch convention: transformation params order [W,H(,D)] and batch order [(D,)H,W]
**kwargs – additional keyword arguments passed to the affine transform
Notes
The offsets for shifting back and to origin are calculated on the entry matching the first item in keys for each batch.
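Since Resize derives from Scale, a resize can be thought of as scaling each dimension by the ratio of target size to input size. A hedged sketch of that ratio computation (illustrative only, not rising's implementation):

```python
def resize_scale_factors(input_size, target_size):
    # Per-dimension scale factor: target / input. A ratio below 1 shrinks
    # that dimension, above 1 enlarges it.
    return [t / s for s, t in zip(input_size, target_size)]
```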
Channel Transforms¶
-
class
rising.transforms.channel.
OneHot
(num_classes, keys=('seg', ), dtype=None, grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.BaseTransform
Convert to one hot encoding. One hot encoding is applied in first dimension which results in shape N x NumClasses x [same as input] while input is expected to have shape N x 1 x [arbitrary additional dimensions]
- Parameters
num_classes (int) – number of classes. If num_classes is None, the number of classes is automatically determined from the current batch (by using the max of the current batch and assuming a consecutive order from zero)
dtype (Optional[dtype]) – optionally changes the dtype of the one-hot encoding
keys (Sequence) – keys which should be augmented
grad (bool) – enable gradient computation inside transformation
**kwargs – keyword arguments passed to one_hot_batch()
Warning
Input tensor needs to be of type torch.long. This could be achieved by applying TensorOp("long", keys=("seg",)).
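The shape change described above (N x 1 x [...] to N x NumClasses x [...]) boils down to classic one-hot encoding. A pure-Python sketch for a flat list of labels (rising operates on batched torch tensors; this is only illustrative):

```python
def one_hot(labels, num_classes=None):
    if num_classes is None:
        # mirrors the documented fallback: max label + 1, assuming
        # consecutive classes starting from zero
        num_classes = max(labels) + 1
    return [[1 if c == label else 0 for c in range(num_classes)]
            for label in labels]
```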
-
class
rising.transforms.channel.
ArgMax
(dim, keepdim=True, keys=('seg', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.BaseTransform
Compute argmax along given dimension. Can be used to revert OneHot encoding.
- Parameters
dim (int) – dimension to apply argmax
keepdim (bool) – whether the output tensor has dim retained or not
keys (Sequence) – keys which should be augmented
grad (bool) – enable gradient computation inside transformation
**kwargs – keyword arguments passed to one_hot_batch()
Warning
The output of the argmax function is always a tensor of dtype long.
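Reverting a one-hot row to its class index is a plain argmax along the class dimension; a minimal sketch (illustrative, not the batched tensor version):

```python
def argmax(vec):
    # index of the largest entry; turns a one-hot row back into a class id
    return max(range(len(vec)), key=lambda i: vec[i])
```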
Cropping Transforms¶
-
class
rising.transforms.crop.
CenterCrop
(size, keys=('data', ), grad=False, **kwargs)[source][source]¶
-
class
rising.transforms.crop.
RandomCrop
(size, dist=0, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.BaseTransform
- Parameters
size (Union[int, Sequence, AbstractParameter]) – size of crop
dist (Union[int, Sequence, AbstractParameter]) – minimum distance to border. By default zero
keys (Sequence) – keys which should be augmented
grad (bool) – enable gradient computation inside transformation
**kwargs – keyword arguments passed to augment_fn
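The crop placement logic can be sketched in one dimension: the start index is drawn uniformly so that the crop window stays at least `dist` away from both borders. An illustrative pure-Python version (not rising's implementation):

```python
import random

def random_crop_start(length, crop_size, dist=0):
    # valid start positions keep the whole window at least `dist`
    # away from either border
    lo, hi = dist, length - crop_size - dist
    if hi < lo:
        raise ValueError("crop does not fit with the requested border distance")
    return random.randint(lo, hi)
```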
Format Transforms¶
-
class
rising.transforms.format.
MapToSeq
(*keys, grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.AbstractTransform
Convert dict to sequence
- Parameters
keys – keys which are mapped into a sequence
grad (bool) – enable gradient computation inside transformation
**kwargs – additional keyword arguments passed to superclass
-
class
rising.transforms.format.
SeqToMap
(*keys, grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.AbstractTransform
Convert sequence to dict
- Parameters
keys – keys which are mapped into a dict
grad (bool) – enable gradient computation inside transformation
**kwargs – additional keyword arguments passed to superclass
-
class
rising.transforms.format.
PopKeys
(keys, return_popped=False)[source][source]¶ Bases:
rising.transforms.abstract.AbstractTransform
Pops keys from a given data dict
- Parameters
-
class
rising.transforms.format.
FilterKeys
(keys, return_popped=False)[source][source]¶ Bases:
rising.transforms.abstract.AbstractTransform
Filters keys from a given data dict
- Parameters
-
class
rising.transforms.format.
RenameKeys
(keys)[source][source]¶ Bases:
rising.transforms.abstract.AbstractTransform
Rename keys inside batch
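The dict plumbing these format transforms perform is simple; minimal sketches of popping and filtering keys (illustrative helpers, not rising's implementation):

```python
def pop_keys(data, keys, return_popped=False):
    # remove the given keys from the dict, optionally returning them
    popped = {k: data.pop(k) for k in keys}
    return (data, popped) if return_popped else data

def filter_keys(data, keys):
    # drop the given keys, keeping everything else
    return {k: v for k, v in data.items() if k not in keys}
```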
Intensity Transforms¶
-
class
rising.transforms.intensity.
Clamp
(min, max, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.BaseTransform
Apply augment_fn to keys
- Parameters
min (Union[float, AbstractParameter]) – minimal value
max (Union[float, AbstractParameter]) – maximal value
keys (Sequence) – the keys corresponding to the values to clamp
grad (bool) – enable gradient computation inside transformation
**kwargs – keyword arguments passed to augment_fn
-
class
rising.transforms.intensity.
NormRange
(min, max, keys=('data', ), per_channel=True, grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.PerSampleTransform
- Parameters
min (Union[float, AbstractParameter]) – minimal value
max (Union[float, AbstractParameter]) – maximal value
keys (Sequence) – keys to normalize
per_channel (bool) – normalize per channel
grad (bool) – enable gradient computation inside transformation
**kwargs – keyword arguments passed to normalization function
-
class
rising.transforms.intensity.
NormMinMax
(keys=('data', ), per_channel=True, grad=False, eps=1e-08, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.PerSampleTransform
Norm to [0, 1]
- Parameters
keys (Sequence) – keys to normalize
per_channel (bool) – normalize per channel
grad (bool) – enable gradient computation inside transformation
eps (Optional[float]) – small constant for numerical stability. If None, no constant will be added
**kwargs – keyword arguments passed to normalization function
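The normalization to [0, 1] is a standard min-max rescaling with an eps in the denominator for stability. A sketch over a flat list of values (rising applies this per sample/channel on tensors; this is only illustrative):

```python
def norm_min_max(values, eps=1e-8):
    # shift so the minimum is 0, then divide by the (eps-padded) range
    lo, hi = min(values), max(values)
    denom = (hi - lo) + eps
    return [(v - lo) / denom for v in values]
```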
-
class
rising.transforms.intensity.
NormZeroMeanUnitStd
(keys=('data', ), per_channel=True, grad=False, eps=1e-08, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.PerSampleTransform
Normalize mean to zero and std to one
- Parameters
keys (Sequence) – keys to normalize
per_channel (bool) – normalize per channel
grad (bool) – enable gradient computation inside transformation
eps (Optional[float]) – small constant for numerical stability. If None, no constant will be added
**kwargs – keyword arguments passed to normalization function
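Zero-mean unit-std normalization is the classic z-score: subtract the mean, divide by the standard deviation (with eps for stability). A flat-list sketch for illustration (not the batched tensor implementation):

```python
import math

def norm_zero_mean_unit_std(values, eps=1e-8):
    # subtract the mean, then divide by the standard deviation
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / (math.sqrt(var) + eps) for v in values]
```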
-
class
rising.transforms.intensity.
NormMeanStd
(mean, std, keys=('data', ), per_channel=True, grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.PerSampleTransform
Normalize mean and std with provided values
- Parameters
mean (Union[float, Sequence[float]]) – used for mean normalization
std (Union[float, Sequence[float]]) – used for std normalization
per_channel (bool) – normalize per channel
grad (bool) – enable gradient computation inside transformation
**kwargs – keyword arguments passed to normalization function
-
class
rising.transforms.intensity.
Noise
(noise_type, per_channel=False, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.PerChannelTransform
Add noise to data
- Parameters
noise_type (str) – supports all inplace functions of a torch.Tensor
per_channel (bool) – enable transformation per channel
keys (Sequence) – keys to normalize
grad (bool) – enable gradient computation inside transformation
**kwargs – keyword arguments passed to noise function
See also
torch.Tensor.normal_()
,torch.Tensor.exponential_()
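The idea behind these noise transforms is to draw one random value per element and combine it with the data; rising does this with torch's in-place sampling functions. A loose stdlib-only sketch of the additive Gaussian case (illustrative, not the torch-based implementation):

```python
import random

def add_gaussian_noise(values, mean=0.0, std=0.1, rng=random):
    # draw a fresh Gaussian sample per value and add it
    return [v + rng.gauss(mean, std) for v in values]
```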
-
class
rising.transforms.intensity.
GaussianNoise
(mean, std, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.intensity.Noise
Add gaussian noise to data
-
class
rising.transforms.intensity.
ExponentialNoise
(lambd, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.intensity.Noise
Add exponential noise to data
-
class
rising.transforms.intensity.
GammaCorrection
(gamma, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.BaseTransform
Apply Gamma correction
- Parameters
gamma (Union[float, AbstractParameter]) – define gamma
keys (Sequence) – keys to normalize
grad (bool) – enable gradient computation inside transformation
**kwargs – keyword arguments passed to superclass
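Gamma correction is the classic power-law mapping v ↦ v^gamma. A minimal sketch, assuming intensities already lie in [0, 1] (illustrative, not the tensor implementation):

```python
def gamma_correction(values, gamma):
    # power-law intensity mapping; gamma > 1 darkens, gamma < 1 brightens
    return [v ** gamma for v in values]
```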
-
class
rising.transforms.intensity.
RandomValuePerChannel
(augment_fn, random_sampler, per_channel=False, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.PerChannelTransform
Apply augmentations which take random values as input by keyword
value
- Parameters
augment_fn (callable) – augmentation function
random_mode – specifies the distribution used to sample the additive value. All functions from python's random module are supported
random_args – positional arguments passed to the random function
per_channel (bool) – enable transformation per channel
keys (Sequence) – keys which should be augmented
grad (bool) – enable gradient computation inside transformation
**kwargs – keyword arguments passed to augment_fn
-
class
rising.transforms.intensity.
RandomAddValue
(random_sampler, per_channel=False, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.intensity.RandomValuePerChannel
Increase values additively
- Parameters
random_sampler (AbstractParameter) – specify values to add
per_channel (bool) – enable transformation per channel
keys (Sequence) – keys which should be augmented
grad (bool) – enable gradient computation inside transformation
**kwargs – keyword arguments passed to augment_fn
-
class
rising.transforms.intensity.
RandomScaleValue
(random_sampler, per_channel=False, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.intensity.RandomValuePerChannel
Scale Values
- Parameters
random_sampler (AbstractParameter) – specify values to scale by
per_channel (bool) – enable transformation per channel
keys (Sequence) – keys which should be augmented
grad (bool) – enable gradient computation inside transformation
**kwargs – keyword arguments passed to augment_fn
Clamp¶
-
class
rising.transforms.intensity.
Clamp
(min, max, keys=('data', ), grad=False, **kwargs)[source][source] Bases:
rising.transforms.abstract.BaseTransform
Apply augment_fn to keys
- Parameters
min (
Union
[float
,AbstractParameter
]) – minimal valuemax (
Union
[float
,AbstractParameter
]) – maximal valuekeys (
Sequence
) – the keys corresponding to the values to clampgrad (
bool
) – enable gradient computation inside transformation**kwargs – keyword arguments passed to augment_fn
NormRange¶
-
class
rising.transforms.intensity.
NormRange
(min, max, keys=('data', ), per_channel=True, grad=False, **kwargs)[source][source] Bases:
rising.transforms.abstract.PerSampleTransform
- Parameters
min (
Union
[float
,AbstractParameter
]) – minimal valuemax (
Union
[float
,AbstractParameter
]) – maximal valuekeys (
Sequence
) – keys to normalizeper_channel (
bool
) – normalize per channelgrad (
bool
) – enable gradient computation inside transformation**kwargs – keyword arguments passed to normalization function
NormMinMax¶
-
class
rising.transforms.intensity.
NormMinMax
(keys=('data', ), per_channel=True, grad=False, eps=1e-08, **kwargs)[source][source] Bases:
rising.transforms.abstract.PerSampleTransform
Norm to [0, 1]
- Parameters
keys (
Sequence
) – keys to normalizeper_channel (
bool
) – normalize per channelgrad (
bool
) – enable gradient computation inside transformationeps (
Optional
[float
]) – small constant for numerical stability. If None, no factor constant will be added**kwargs – keyword arguments passed to normalization function
NormZeroMeanUnitStd¶
-
class
rising.transforms.intensity.
NormZeroMeanUnitStd
(keys=('data', ), per_channel=True, grad=False, eps=1e-08, **kwargs)[source][source] Bases:
rising.transforms.abstract.PerSampleTransform
Normalize mean to zero and std to one
- Parameters
keys (
Sequence
) – keys to normalizeper_channel (
bool
) – normalize per channelgrad (
bool
) – enable gradient computation inside transformationeps (
Optional
[float
]) – small constant for numerical stability. If None, no factor constant will be added**kwargs – keyword arguments passed to normalization function
NormMeanStd¶
-
class
rising.transforms.intensity.
NormMeanStd
(mean, std, keys=('data', ), per_channel=True, grad=False, **kwargs)[source][source] Bases:
rising.transforms.abstract.PerSampleTransform
Normalize mean and std with provided values
- Parameters
mean (
Union
[float
,Sequence
[float
]]) – used for mean normalizationstd (
Union
[float
,Sequence
[float
]]) – used for std normalizationper_channel (
bool
) – normalize per channelgrad (
bool
) – enable gradient computation inside transformation**kwargs – keyword arguments passed to normalization function
Noise¶
-
class
rising.transforms.intensity.
Noise
(noise_type, per_channel=False, keys=('data', ), grad=False, **kwargs)[source][source] Bases:
rising.transforms.abstract.PerChannelTransform
Add noise to data
- Parameters
noise_type (
str
) – supports all inplace functions of atorch.Tensor
per_channel (
bool
) – enable transformation per channelkeys (
Sequence
) – keys to normalizegrad (
bool
) – enable gradient computation inside transformationkwargs – keyword arguments passed to noise function
See also
torch.Tensor.normal_()
,torch.Tensor.exponential_()
GaussianNoise¶
-
class
rising.transforms.intensity.
GaussianNoise
(mean, std, keys=('data', ), grad=False, **kwargs)[source][source] Bases:
rising.transforms.intensity.Noise
Add gaussian noise to data
ExponentialNoise¶
-
class
rising.transforms.intensity.
ExponentialNoise
(lambd, keys=('data', ), grad=False, **kwargs)[source][source] Bases:
rising.transforms.intensity.Noise
Add exponential noise to data
GammaCorrection¶
-
class
rising.transforms.intensity.
GammaCorrection
(gamma, keys=('data', ), grad=False, **kwargs)[source][source] Bases:
rising.transforms.abstract.BaseTransform
Apply Gamma correction
- Parameters
gamma (
Union
[float
,AbstractParameter
]) – define gammakeys (
Sequence
) – keys to normalizegrad (
bool
) – enable gradient computation inside transformation**kwargs – keyword arguments passed to superclass
RandomValuePerChannel¶
-
class
rising.transforms.intensity.
RandomValuePerChannel
(augment_fn, random_sampler, per_channel=False, keys=('data', ), grad=False, **kwargs)[source][source] Bases:
rising.transforms.abstract.PerChannelTransform
Apply augmentations which take random values as input by keyword
value
- Parameters
augment_fn (
callable
) – augmentation functionrandom_mode – specifies distribution which should be used to sample additive value. All function from python’s random module are supported
random_args – positional arguments passed for random function
per_channel (
bool
) – enable transformation per channelkeys (
Sequence
) – keys which should be augmentedgrad (
bool
) – enable gradient computation inside transformation**kwargs – keyword arguments passed to augment_fn
RandomAddValue¶
-
class
rising.transforms.intensity.
RandomAddValue
(random_sampler, per_channel=False, keys=('data', ), grad=False, **kwargs)[source][source] Bases:
rising.transforms.intensity.RandomValuePerChannel
Increase values additively
- Parameters
  - random_sampler (AbstractParameter) – specify values to add
  - per_channel (bool) – enable transformation per channel
  - keys (Sequence) – keys which should be augmented
  - grad (bool) – enable gradient computation inside transformation
  - **kwargs – keyword arguments passed to augment_fn
RandomScaleValue¶
-
class
rising.transforms.intensity.
RandomScaleValue
(random_sampler, per_channel=False, keys=('data', ), grad=False, **kwargs)[source][source] Bases:
rising.transforms.intensity.RandomValuePerChannel
Scale values multiplicatively
- Parameters
  - random_sampler (AbstractParameter) – specify values to scale by
  - per_channel (bool) – enable transformation per channel
  - keys (Sequence) – keys which should be augmented
  - grad (bool) – enable gradient computation inside transformation
  - **kwargs – keyword arguments passed to augment_fn
Kernel Transforms¶
-
class
rising.transforms.kernel.
KernelTransform
(in_channels, kernel_size, dim=2, stride=1, padding=0, padding_mode='zero', keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.AbstractTransform
Base class for kernel-based transformations (the kernel is applied to each channel individually)
- Parameters
  - in_channels (int) – number of input channels
  - dim (int) – number of spatial dimensions
  - padding_mode (str) – padding mode for input. Supports all modes from torch.functional.pad() except circular
  - keys (Sequence) – keys which should be augmented
  - grad (bool) – enable gradient computation inside transformation
  - kwargs – keyword arguments passed to superclass
See also
torch.functional.pad()
-
class
rising.transforms.kernel.
GaussianSmoothing
(in_channels, kernel_size, std, dim=2, stride=1, padding=0, padding_mode='reflect', keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.kernel.KernelTransform
Perform Gaussian smoothing. Filtering is performed separately for each channel in the input using a depthwise convolution. This code is adapted from: https://discuss.pytorch.org/t/is-there-anyway-to-do-gaussian-filtering-for-an-image-2d-3d-in-pytorch/12351/10
- Parameters
  - in_channels (int) – number of input channels
  - dim (int) – number of spatial dimensions
  - padding_mode (str) – padding mode for input. Supports all modes from torch.functional.pad() except circular
  - keys (Sequence) – keys which should be augmented
  - grad (bool) – enable gradient computation inside transformation
  - **kwargs – keyword arguments passed to superclass
See also
torch.functional.pad()
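The transform builds a Gaussian kernel from kernel_size and std and convolves each channel with it independently. A 1D pure-Python sketch of the kernel construction and per-channel filtering (rising uses a depthwise torch convolution instead; these helper names are illustrative):

```python
import math

def gaussian_kernel1d(kernel_size, std):
    """Normalized 1D Gaussian kernel centered on the middle tap."""
    center = (kernel_size - 1) / 2.0
    weights = [math.exp(-((i - center) ** 2) / (2.0 * std ** 2))
               for i in range(kernel_size)]
    total = sum(weights)
    return [w / total for w in weights]

def smooth_channel(signal, kernel):
    """'Valid' 1D convolution of a single channel with the kernel (no padding)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]
```

Because the kernel is normalized to sum to one, smoothing a constant signal leaves it unchanged up to floating-point error.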
Spatial Transforms¶
-
class
rising.transforms.spatial.
Mirror
(dims, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.BaseTransform
Random mirror transform
- Parameters
  - dims (Union[int, DiscreteParameter, Sequence[Union[int, DiscreteParameter]]]) – axes which should be mirrored
  - prob – probability for mirror. If a float value is provided, it is used for all dims
  - grad (bool) – enable gradient computation inside transformation
  - **kwargs – keyword arguments passed to superclass
Examples
>>> # Use mirror transform for augmentations
>>> from rising.random import DiscreteCombinationsParameter
>>> # We sample from all possible mirror combinations for
>>> # volumetric data
>>> trafo = Mirror(DiscreteCombinationsParameter((0, 1, 2)))
-
class
rising.transforms.spatial.
Rot90
(dims, keys=('data', ), num_rots=(0, 1, 2, 3), prob=0.5, grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.AbstractTransform
Rotate by 90 degrees around dims
- Parameters
  - dims (Union[Sequence[int], DiscreteParameter]) – dims/axes to rotate. If more than two dims are provided, 2 dimensions are randomly chosen at each call
  - num_rots (Sequence[int]) – possible values for the number of rotations
  - prob (float) – probability for rotation
  - grad (bool) – enable gradient computation inside transformation
  - kwargs – keyword arguments passed to superclass
See also
torch.Tensor.rot90()
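For intuition, one 90-degree rotation of a 2D grid can be written in plain Python; this mirrors the counter-clockwise convention of torch.Tensor.rot90() for dims=(0, 1):

```python
def rot90(matrix, k=1):
    """Rotate a 2D list 90 degrees counter-clockwise, k times."""
    for _ in range(k % 4):
        # Transpose, then reverse the row order: one CCW quarter turn.
        matrix = [list(row) for row in zip(*matrix)][::-1]
    return matrix

rot90([[1, 2], [3, 4]])  # -> [[2, 4], [1, 3]]
```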
-
class
rising.transforms.spatial.
ResizeNative
(size, mode='nearest', align_corners=None, preserve_range=False, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.BaseTransform
Resize data to given size
- Parameters
  - size (Union[int, Sequence[int]]) – spatial output size (excluding batch size and number of channels)
  - mode (str) – one of nearest, linear, bilinear, bicubic, trilinear, area (for more information see torch.nn.functional.interpolate())
  - align_corners (Optional[bool]) – input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels
  - preserve_range (bool) – output tensor has the same range as the input tensor
  - keys (Sequence) – keys which should be augmented
  - grad (bool) – enable gradient computation inside transformation
  - **kwargs – keyword arguments passed to augment_fn
-
class
rising.transforms.spatial.
Zoom
(scale_factor=(0.75, 1.25), mode='nearest', align_corners=None, preserve_range=False, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.BaseTransform
Apply augment_fn to keys. By default the scaling factor is sampled from a uniform distribution over the range specified by scale_factor
- Parameters
  - scale_factor (Union[Sequence, AbstractParameter]) – range to sample the scale factor from. If Sequence[Sequence] is provided, a random value is generated for each item in the outer Sequence. This can be used to set different ranges for different axes
  - mode (str) – one of nearest, linear, bilinear, bicubic, trilinear, area (for more information see torch.nn.functional.interpolate())
  - align_corners (Optional[bool]) – input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels
  - preserve_range (bool) – output tensor has the same range as the input tensor
  - keys (Sequence) – keys which should be augmented
  - grad (bool) – enable gradient computation inside transformation
  - **kwargs – keyword arguments passed to augment_fn
-
class
rising.transforms.spatial.
ProgressiveResize
(scheduler, mode='nearest', align_corners=None, preserve_range=False, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.spatial.ResizeNative
Resize data to sizes specified by scheduler
- Parameters
  - scheduler (Callable[[int], Union[int, Sequence[int]]]) – scheduler which determines the current size. The scheduler is called with the current iteration of the transform
  - mode (str) – one of nearest, linear, bilinear, bicubic, trilinear, area (for more information see torch.nn.functional.interpolate())
  - align_corners (Optional[bool]) – input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels
  - preserve_range (bool) – output tensor has the same range as the input tensor
  - keys (Sequence) – keys which should be augmented
  - grad (bool) – enable gradient computation inside transformation
  - **kwargs – keyword arguments passed to augment_fn
Warning
When this transformation is used in combination with multiprocessing, the step counter is not perfectly synchronized between multiple processes. As a result, the step count may jump between values in a range of the number of processes used.
-
forward
(**data)[source][source]¶ Resize data
- Parameters
**data – input batch
- Returns
augmented batch
- Return type
-
class
rising.transforms.spatial.
SizeStepScheduler
(milestones, sizes)[source][source]¶ Bases:
object
Scheduler that returns the size corresponding to the milestone reached
- Parameters
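Based on the (milestones, sizes) signature, the scheduler can be sketched as a step function over the iteration count; the exact milestone semantics (e.g. inclusive vs. exclusive boundaries) in rising's implementation may differ from this assumption:

```python
def make_size_step_scheduler(milestones, sizes):
    """Map the current step to a size; expects one more size than milestones."""
    assert len(sizes) == len(milestones) + 1

    def scheduler(step):
        # Return the first size whose milestone has not been reached yet.
        for milestone, size in zip(milestones, sizes):
            if step < milestone:
                return size
        return sizes[-1]

    return scheduler

sched = make_size_step_scheduler(milestones=[10, 20], sizes=[16, 32, 64])
# sched(0) -> 16, sched(10) -> 32, sched(25) -> 64
```

A callable like this can be passed as the scheduler argument of ProgressiveResize to grow the training resolution over time.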
Tensor Transforms¶
-
class
rising.transforms.tensor.
ToTensor
(keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.BaseTransform
Transform Input Collection to Collection of
torch.Tensor
-
class
rising.transforms.tensor.
ToDeviceDtype
(device=None, dtype=None, non_blocking=False, copy=False, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.BaseTransform
Push data to device and convert to dtype
- Parameters
  - dtype (Optional[dtype]) – target dtype
  - non_blocking (bool) – if True and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect
  - copy (bool) – create a copy of the data
  - keys (Sequence) – keys which should be augmented
  - grad (bool) – enable gradient computation inside transformation
  - **kwargs – keyword arguments passed to function
-
class
rising.transforms.tensor.
ToDevice
(device, non_blocking=False, copy=False, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.tensor.ToDeviceDtype
Push data to device
- Parameters
  - non_blocking (bool) – if True and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect
  - copy (bool) – create a copy of the data
  - keys (Sequence) – keys which should be augmented
  - grad (bool) – enable gradient computation inside transformation
  - **kwargs – keyword arguments passed to function
-
class
rising.transforms.tensor.
ToDtype
(dtype, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.tensor.ToDeviceDtype
Convert data to dtype
-
class
rising.transforms.tensor.
TensorOp
(op_name, *args, keys=('data', ), grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.BaseTransform
Apply functions that are supported by the torch.Tensor class
-
class
rising.transforms.tensor.
Permute
(dims, grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.BaseTransform
Permute dimensions of tensor
- Parameters
Utility Transforms¶
-
class
rising.transforms.utility.
DoNothing
(grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.AbstractTransform
Transform that returns the input as is
- Parameters
grad (
bool
) – enable gradient computation inside transformation**kwargs – keyword arguments passed to superclass
-
class
rising.transforms.utility.
SegToBox
(keys, grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.AbstractTransform
Convert instance segmentation to bounding boxes
- Parameters
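The underlying operation computes, for each instance id, the minimal axis-aligned box covering its pixels. A 2D pure-Python sketch (rising operates on tensors; the (min_row, min_col, max_row, max_col) box format and helper name here are illustrative):

```python
def seg_to_boxes(mask):
    """Bounding box per instance id in a 2D instance mask (0 = background)."""
    boxes = {}
    for r, row in enumerate(mask):
        for c, inst in enumerate(row):
            if inst == 0:
                continue
            if inst not in boxes:
                boxes[inst] = [r, c, r, c]
            else:
                b = boxes[inst]
                b[0], b[1] = min(b[0], r), min(b[1], c)
                b[2], b[3] = max(b[2], r), max(b[3], c)
    return {inst: tuple(b) for inst, b in boxes.items()}

mask = [[0, 1, 1],
        [0, 1, 0],
        [2, 0, 0]]
seg_to_boxes(mask)  # -> {1: (0, 1, 1, 2), 2: (2, 0, 2, 0)}
```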
-
class
rising.transforms.utility.
BoxToSeg
(keys, shape, dtype, device, grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.AbstractTransform
Convert bounding boxes to instance segmentation
- Parameters
  - keys (Mapping[Hashable, Hashable]) – the key specifies which item to use as the bounding boxes and the item specifies where to save the resulting segmentation
  - shape (Sequence[int]) – spatial shape of the output tensor (batch size is derived from the bounding boxes; the output has one channel)
  - dtype (dtype) – dtype of the segmentation
  - grad (bool) – enable gradient computation inside transformation
  - **kwargs – additional keyword arguments forwarded to the base class
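The inverse direction rasterizes each box into an output mask of the given spatial shape. A 2D sketch with inclusive box coordinates (illustrative only; rising writes into a tensor with the given shape, dtype, and device):

```python
def boxes_to_seg(boxes, shape):
    """Rasterize (min_r, min_c, max_r, max_c) boxes (inclusive) into a 2D instance mask."""
    rows, cols = shape
    mask = [[0] * cols for _ in range(rows)]
    for inst, (r0, c0, r1, c1) in boxes.items():
        for r in range(r0, r1 + 1):
            for c in range(c0, c1 + 1):
                mask[r][c] = inst
    return mask

boxes_to_seg({1: (0, 0, 1, 1)}, (3, 3))
# -> [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
```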
-
class
rising.transforms.utility.
InstanceToSemantic
(keys, cls_key, grad=False, **kwargs)[source][source]¶ Bases:
rising.transforms.abstract.AbstractTransform
Convert an instance segmentation to a semantic segmentation
- Parameters
  - keys (Mapping[str, str]) – the key specifies which item to use as the instance segmentation and the item specifies where to save the semantic segmentation
  - cls_key (Hashable) – key where the class mapping is saved. The mapping needs to be a Sequence[Sequence[int]]
  - grad (bool) – enable gradient computation inside transformation
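Conceptually, the transform relabels every instance id to its class id via the class mapping. A 2D sketch using a plain dict as the mapping (the documented format is a Sequence[Sequence[int]]; the dict form is used here only for clarity):

```python
def instance_to_semantic(mask, cls_mapping):
    """Replace instance ids with class ids; 0 stays background."""
    return [[0 if inst == 0 else cls_mapping[inst] for inst in row]
            for row in mask]

instance_to_semantic([[0, 1], [2, 1]], {1: 5, 2: 7})
# -> [[0, 5], [7, 5]]
```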