Training with ONNX Runtime and PyTorch
ORTModule
- class onnxruntime.training.ortmodule.ORTModule(module, debug_options=None)
Extends the user's torch.nn.Module model to leverage the ONNX Runtime super-fast training engine.
ORTModule specializes the user's torch.nn.Module model, providing forward() and backward() along with all other torch.nn.Module APIs.
- Parameters
module (torch.nn.Module) – User's PyTorch module that ORTModule specializes
debug_options (DebugOptions, optional) – Debugging options for ORTModule.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
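A minimal usage sketch follows; the two-layer network, loss function, optimizer, and dummy data are illustrative placeholders, and only the ORTModule wrapping itself reflects this API:

import torch
from onnxruntime.training.ortmodule import ORTModule

# Any torch.nn.Module works; this two-layer net is a placeholder.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)

# Wrap the module; forward/backward now run through ONNX Runtime.
model = ORTModule(model)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# One training step with dummy data. The first forward call is slower
# because ORTModule exports and prepares the model for training.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()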
- __annotations__ = {'__call__': typing.Callable[..., typing.Any], '_is_full_backward_hook': typing.Optional[bool], '_version': <class 'int'>, 'dump_patches': <class 'bool'>, 'forward': typing.Callable[..., typing.Any], 'training': <class 'bool'>}
- __call__(*input, **kwargs)
Call self as a function.
- __delattr__(name)
Implement delattr(self, name).
- __dir__()
Default dir() implementation.
- __eq__(value, /)
Return self==value.
- __format__(format_spec, /)
Default object formatter.
- __ge__(value, /)
Return self>=value.
- __getattribute__(name, /)
Return getattr(self, name).
- __getstate__()
- __gt__(value, /)
Return self>value.
- __hash__()
Return hash(self).
- __init__(module, debug_options=None)
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- __init_subclass__()
This method is called when a class is subclassed.
The default implementation does nothing. It may be overridden to extend subclasses.
- __le__(value, /)
Return self<=value.
- __lt__(value, /)
Return self<value.
- __ne__(value, /)
Return self!=value.
- __new__(**kwargs)
- __reduce__()
Helper for pickle.
- __reduce_ex__(protocol, /)
Helper for pickle.
- __repr__()
Return repr(self).
- __setstate__(state)
- __sizeof__()
Size of object in memory, in bytes.
- __str__()
Return str(self).
- __subclasshook__()
Abstract classes can override this to customize issubclass().
This is invoked early on by abc.ABCMeta.__subclasscheck__(). It should return True, False or NotImplemented. If it returns NotImplemented, the normal algorithm is used. Otherwise, it overrides the normal algorithm (and the outcome is cached).
- _apply(fn)
Override original method to delegate execution to the flattened PyTorch user module.
- _call_impl(*input, **kwargs)
- _get_backward_hooks()
Returns the backward hooks for use in the call function. It returns two lists, one with the full backward hooks and one with the non-full backward hooks.
- _get_name()
- _is_training()
- _load_from_state_dict(state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)
Override original method to delegate execution to the original PyTorch user module.
- _maybe_warn_non_full_backward_hook(inputs, result, grad_fn)
- _named_members(get_members_fn, prefix='', recurse=True)
Helper method for yielding various names + members of modules.
- _register_load_state_dict_pre_hook(hook, with_module=False)
These hooks will be called with arguments: state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs, before loading state_dict into self. These arguments are exactly the same as those of _load_from_state_dict.
If with_module is True, then the first argument to the hook is an instance of the module.
- Parameters
hook (Callable) – Callable hook that will be invoked before loading the state dict.
with_module (bool, optional) – Whether or not to pass the module instance to the hook as the first parameter.
- _register_state_dict_hook(hook)
These hooks will be called with arguments: self, state_dict, prefix, local_metadata, after the state_dict of self is set. Note that only parameters and buffers of self or its children are guaranteed to exist in state_dict. The hooks may modify state_dict inplace or return a new one.
- _replicate_for_data_parallel()
Raises a NotImplementedError exception since ORTModule is not compatible with torch.nn.DataParallel.
torch.nn.DataParallel requires the model to be replicated across multiple devices, and in this process, ORTModule tries to export the model to ONNX on multiple devices with the same sample input. Because of this multiple-device export with the same sample input, torch throws an exception that reads: "RuntimeError: Input, output and indices must be on the current device", which can be vague to the user since they might not be aware of what happens behind the scenes.
We therefore try to preemptively catch use of ORTModule with torch.nn.DataParallel and throw a more meaningful exception.
Users must use torch.nn.parallel.DistributedDataParallel instead of torch.nn.DataParallel; it does not require model replication and is the alternative recommended by torch.
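A sketch of the recommended pattern, assuming a typical distributed setup: the model and rank handling are placeholders, and torch.distributed.init_process_group is assumed to have already run (e.g. via torchrun):

import torch
from onnxruntime.training.ortmodule import ORTModule

# Placeholder; usually read from the LOCAL_RANK environment variable.
local_rank = 0
device = torch.device("cuda", local_rank)

model = torch.nn.Linear(10, 10).to(device)  # placeholder model
model = ORTModule(model)

# Use DistributedDataParallel, not DataParallel, with ORTModule.
model = torch.nn.parallel.DistributedDataParallel(
    model, device_ids=[local_rank]
)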
- _save_to_state_dict(destination, prefix, keep_vars)
Saves module state to destination dictionary, containing a state of the module, but not its descendants. This is called on every submodule in state_dict().
In rare cases, subclasses can achieve class-specific behavior by overriding this method with custom logic.
- _slow_forward(*input, **kwargs)
- add_module(name: str, module: Optional[Module]) → None
Raises an ORTModuleTorchModelException exception since ORTModule does not support adding modules to it.
- apply(fn: Callable[[Module], None]) → onnxruntime.training.ortmodule.ortmodule.T
Override apply() to delegate execution to ONNX Runtime.
- bfloat16() → torch.nn.modules.module.T
Casts all floating point parameters and buffers to bfloat16 datatype.
Note
This method modifies the module in-place.
- Returns
self
- Return type
Module
- buffers(recurse: bool = True) → Iterator[torch.Tensor]
Override buffers().
- children() → Iterator[torch.nn.modules.module.Module]
Returns an iterator over immediate children modules.
- Yields
Module – a child module
- cpu() → torch.nn.modules.module.T
Moves all model parameters and buffers to the CPU.
Note
This method modifies the module in-place.
- Returns
self
- Return type
Module
- cuda(device: Optional[Union[int, torch.device]] = None) → torch.nn.modules.module.T
Moves all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on GPU while being optimized.
Note
This method modifies the module in-place.
- Parameters
device (int, optional) – if specified, all parameters will be copied to that device
- Returns
self
- Return type
Module
- double() → torch.nn.modules.module.T
Casts all floating point parameters and buffers to double datatype.
Note
This method modifies the module in-place.
- Returns
self
- Return type
Module
- eval() → torch.nn.modules.module.T
Sets the module in evaluation mode.
This has any effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
This is equivalent to self.train(False).
See Locally disabling gradient computation for a comparison between .eval() and several similar mechanisms that may be confused with it.
- Returns
self
- Return type
Module
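For example, switching to evaluation mode for inference; model and batch are placeholders:

model.eval()                    # equivalent to model.train(False)
with torch.no_grad():           # commonly paired with eval() for inference
    predictions = model(batch)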
- extra_repr() → str
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- float() → torch.nn.modules.module.T
Casts all floating point parameters and buffers to float datatype.
Note
This method modifies the module in-place.
- Returns
self
- Return type
Module
- forward(*inputs, **kwargs)
Delegate the forward() pass of PyTorch training to ONNX Runtime.
The first call to forward performs setup and checking steps. During this call, ORTModule determines whether the module can be trained with ONNX Runtime. For this reason, the first forward call execution takes longer than subsequent calls. Execution is interrupted if ONNX Runtime cannot process the model for training.
- Parameters
inputs – Variable positional arguments defined in the user's PyTorch module's forward method. Values can be torch tensors and primitive types.
kwargs – Variable keyword arguments defined in the user's PyTorch module's forward method. Values can be torch tensors and primitive types.
- Returns
The output as expected from the forward method defined by the user's PyTorch module. Output values supported include tensors, nested sequences of tensors and nested dictionaries of tensor values.
- get_buffer(target: str) → torch.Tensor
Override get_buffer().
- get_extra_state() → Any
Returns any extra state to include in the module's state_dict. Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module's state_dict().
Note that extra state should be pickleable to ensure working serialization of the state_dict. We only provide backwards-compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.
- Returns
Any extra state to store in the module's state_dict
- Return type
object
- get_parameter(target: str) → torch.nn.parameter.Parameter
Override get_parameter().
- get_submodule(target: str) → torch.nn.modules.module.Module
Returns the submodule given by target if it exists, otherwise throws an error.
For example, let's say you have an nn.Module A that looks like this:
(The diagram shows an nn.Module A. A has a nested submodule net_b, which itself has two submodules net_c and linear. net_c then has a submodule conv.)
To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv").
The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used.
- Parameters
target – The fully-qualified string name of the submodule to look for. (See above example for how to specify a fully-qualified string.)
- Returns
The submodule referenced by target
- Return type
torch.nn.Module
- Raises
AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Module
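A sketch of the nested structure described above; the layer choices are illustrative:

import torch.nn as nn

class NetC(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(16, 33, 3)  # illustrative layer

class NetB(nn.Module):
    def __init__(self):
        super().__init__()
        self.net_c = NetC()
        self.linear = nn.Linear(100, 200)  # illustrative layer

class A(nn.Module):
    def __init__(self):
        super().__init__()
        self.net_b = NetB()

a = A()
linear = a.get_submodule("net_b.linear")    # the nn.Linear above
conv = a.get_submodule("net_b.net_c.conv")  # the nn.Conv2d above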
- half() → torch.nn.modules.module.T
Casts all floating point parameters and buffers to half datatype.
Note
This method modifies the module in-place.
- Returns
self
- Return type
Module
- load_state_dict(state_dict: OrderedDict[str, Tensor], strict: bool = True)
Override load_state_dict() to delegate execution to ONNX Runtime.
- property module
The original torch.nn.Module that this module wraps.
This property provides access to methods and properties on the original module.
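For instance, reaching user-defined helpers on the wrapped model; pytorch_model is a placeholder and my_custom_method is a hypothetical user-defined method:

from onnxruntime.training.ortmodule import ORTModule

ort_model = ORTModule(pytorch_model)  # `pytorch_model` is a placeholder
original = ort_model.module          # the wrapped torch.nn.Module
# User-defined attributes live on the original module, e.g.:
# ort_model.module.my_custom_method()  # hypothetical method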
- named_buffers(prefix: str = '', recurse: bool = True) → Iterator[Tuple[str, torch.Tensor]]
Override named_buffers().
- named_children() → Iterator[Tuple[str, Module]]
Override named_children().
- named_modules(*args, **kwargs)
Override named_modules().
- named_parameters(prefix: str = '', recurse: bool = True) → Iterator[Tuple[str, torch.nn.parameter.Parameter]]
Override named_parameters().
- parameters(recurse: bool = True) → Iterator[torch.nn.parameter.Parameter]
Override parameters().
- register_backward_hook(hook: Callable[[torch.nn.modules.module.Module, Union[Tuple[torch.Tensor, ...], torch.Tensor], Union[Tuple[torch.Tensor, ...], torch.Tensor]], Union[None, torch.Tensor]]) → torch.utils.hooks.RemovableHandle
Registers a backward hook on the module.
This function is deprecated in favor of register_full_backward_hook() and the behavior of this function will change in future versions.
- Returns
a handle that can be used to remove the added hook by calling handle.remove()
- Return type
torch.utils.hooks.RemovableHandle
- register_buffer(name: str, tensor: Optional[torch.Tensor], persistent: bool = True) → None
Override register_buffer().
- register_forward_hook(hook: Callable[[...], None]) → torch.utils.hooks.RemovableHandle
Registers a forward hook on the module.
The hook will be called every time after forward() has computed an output. It should have the following signature:
hook(module, input, output) -> None or modified output
The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks, only to the forward. The hook can modify the output. It can also modify the input in-place, but that will have no effect on forward since this hook is called after forward() is called.
- Returns
a handle that can be used to remove the added hook by calling handle.remove()
- Return type
torch.utils.hooks.RemovableHandle
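A small sketch of a forward hook that records output norms; the placeholder module and bookkeeping list are illustrative, and the hook assumes the module returns a single tensor:

import torch

model = torch.nn.Linear(4, 2)  # placeholder module
output_norms = []

def forward_hook(module, inputs, output):
    # Runs after forward(); returning None leaves the output unchanged.
    output_norms.append(output.detach().norm().item())

handle = model.register_forward_hook(forward_hook)
model(torch.randn(3, 4))
handle.remove()  # detach the hook when no longer needed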
- register_forward_pre_hook(hook: Callable[[...], None]) → torch.utils.hooks.RemovableHandle
Registers a forward pre-hook on the module.
The hook will be called every time before forward() is invoked. It should have the following signature:
hook(module, input) -> None or modified input
The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks, only to the forward. The hook can modify the input. The user can return either a tuple or a single modified value from the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple).
- Returns
a handle that can be used to remove the added hook by calling handle.remove()
- Return type
torch.utils.hooks.RemovableHandle
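And similarly a pre-hook, here replacing the positional input before forward() runs; the placeholder module and the scaling are illustrative:

import torch

model = torch.nn.Linear(4, 2)  # placeholder module

def forward_pre_hook(module, inputs):
    # `inputs` is the tuple of positional arguments. Return a tuple
    # (or a single value, which gets wrapped) to replace them,
    # or None to leave them untouched.
    return tuple(x * 0.5 for x in inputs)

handle = model.register_forward_pre_hook(forward_pre_hook)
model(torch.randn(3, 4))  # forward sees the scaled input
handle.remove()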
- register_full_backward_hook(hook: Callable[[torch.nn.modules.module.Module, Union[Tuple[torch.Tensor, ...], torch.Tensor], Union[Tuple[torch.Tensor, ...], torch.Tensor]], Union[None, torch.Tensor]]) → torch.utils.hooks.RemovableHandle
Registers a backward hook on the module.
The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:
hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments; all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module's forward function.
Warning
Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.
- Returns
a handle that can be used to remove the added hook by calling handle.remove()
- Return type
torch.utils.hooks.RemovableHandle
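A sketch of a full backward hook that inspects gradient magnitudes without modifying them; the placeholder module and the printing are illustrative:

import torch

model = torch.nn.Linear(4, 2)  # placeholder module

def backward_hook(module, grad_input, grad_output):
    # grad_input/grad_output are tuples; entries may be None for
    # non-Tensor arguments. Do not modify them in place.
    for g in grad_output:
        if g is not None:
            print("grad_output norm:", g.norm().item())
    return None  # keep the original grad_input

handle = model.register_full_backward_hook(backward_hook)
model(torch.randn(3, 4)).sum().backward()
handle.remove()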
- register_parameter(name: str, param: Optional[torch.nn.parameter.Parameter]) → None
Override register_parameter().
- requires_grad_(requires_grad: bool = True) → torch.nn.modules.module.T
Change if autograd should record operations on parameters in this module.
This method sets the parameters' requires_grad attributes in-place.
This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).
See Locally disabling gradient computation for a comparison between .requires_grad_() and several similar mechanisms that may be confused with it.
- Parameters
requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True.
- Returns
self
- Return type
Module
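For example, freezing a feature extractor and training only a new head; backbone and head are placeholder sub-modules:

import torch

# Placeholder sub-modules standing in for a real backbone and head.
backbone = torch.nn.Linear(8, 8)
head = torch.nn.Linear(8, 2)

backbone.requires_grad_(False)  # freeze the backbone in-place
head.requires_grad_(True)       # True is already the default
optimizer = torch.optim.SGD(head.parameters(), lr=1e-3)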
- set_extra_state(state: Any)
This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.
- Parameters
state (dict) – Extra state from the state_dict
- state_dict(destination=None, prefix='', keep_vars=False)
Override state_dict() to delegate execution to ONNX Runtime.
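A checkpoint round trip works the same way as for a plain torch.nn.Module; model is a placeholder for an ORTModule-wrapped module and the checkpoint path is likewise illustrative:

import torch

# Save the wrapped model's parameters and buffers.
torch.save(model.state_dict(), "checkpoint.pt")  # placeholder path

# Later: restore into a freshly constructed, freshly wrapped model.
model.load_state_dict(torch.load("checkpoint.pt"))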
- to(*args, **kwargs)
Moves and/or casts the parameters and buffers.
This can be called as
- to(device=None, dtype=None, non_blocking=False)
- to(dtype, non_blocking=False)
- to(tensor, non_blocking=False)
- to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
See below for examples.
Note
This method modifies the module in-place.
- Parameters
device (torch.device) – the desired device of the parameters and buffers in this module
dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module
tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument)
- Returns
self
- Return type
Module
Examples:
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
- to_empty(*, device: Union[str, torch.device]) → torch.nn.modules.module.T
Moves the parameters and buffers to the specified device without copying storage.
- Parameters
device (torch.device) – The desired device of the parameters and buffers in this module.
- Returns
self
- Return type
Module
- train(mode: bool = True) → onnxruntime.training.ortmodule.ortmodule.T
Override train() to delegate execution to ONNX Runtime.
- type(dst_type: Union[torch.dtype, str]) → torch.nn.modules.module.T
Casts all parameters and buffers to dst_type.
Note
This method modifies the module in-place.
- Parameters
dst_type (type or string) – the desired type
- Returns
self
- Return type
Module
- xpu(device: Optional[Union[int, torch.device]] = None) → torch.nn.modules.module.T
Moves all model parameters and buffers to the XPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on XPU while being optimized.
Note
This method modifies the module in-place.
- Parameters
device (int, optional) – if specified, all parameters will be copied to that device
- Returns
self
- Return type
Module
- zero_grad(set_to_none: bool = False) → None
Sets gradients of all model parameters to zero. See the similar function under torch.optim.Optimizer for more context.
- Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.
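For example, clearing gradients at the top of each step; model, loss_fn, optimizer, and data_loader are placeholders for the objects built in the earlier training sketch:

for batch, labels in data_loader:
    model.zero_grad(set_to_none=True)  # drop stale gradients first
    loss = loss_fn(model(batch), labels)
    loss.backward()
    optimizer.step()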