onnxcustom: deploy, train machine learned models¶
Examples and tutorials on how to convert machine learned models into ONNX,
implement your own converter or runtime, or even train with ONNX and
onnxruntime.
The documentation introduces onnx and onnxruntime for
inference and training. The package implements training classes following
the scikit-learn API, based on onnxruntime-training, enabling the training of
linear models and neural networks on CPU or GPU.
It also implements tools to manipulate logs produced by NVidia Profiler
(convert_trace_to_json) and
tools to manipulate onnx graphs.
Section API summarizes APIs for onnx, onnxruntime, and this package. Section Tutorials explains the logic behind onnx, onnxruntime, and this package, and guides the user through all the examples this documentation contains.
Contents
Sources are available on github/onnxcustom. The package is available on pypi. Related projects include pyquickhelper (automation of many things) and a blog for unclassified topics. The tutorial related to scikit-learn has been merged into the sklearn-onnx documentation. This package supports ONNX opsets up to the latest opset stored in onnxcustom.__max_supported_opset__, which is:
<<<
import onnxcustom
print(onnxcustom.__max_supported_opset__)
>>>
15
Any opset beyond that value is not supported and could fail. That value applies to the main set of ONNX functions, or main domain. Converters for scikit-learn require another domain, ‘ai.onnx.ml’, to implement trees. The latest supported opsets per domain are defined here:
<<<
import pprint
import onnxcustom
pprint.pprint(onnxcustom.__max_supported_opsets__)
>>>
{'': 15, 'ai.onnx.ml': 2}
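A converter or a user script can guard against unsupported opsets by checking the requested target against this dictionary. A minimal sketch of that check (the helper name is hypothetical; the dictionary values mirror the output above):

```python
# Values mirror onnxcustom.__max_supported_opsets__ shown above.
MAX_SUPPORTED_OPSETS = {"": 15, "ai.onnx.ml": 2}

def is_supported_opset(domain, version):
    """Return True when *version* of *domain* does not exceed the supported maximum.

    Hypothetical helper for illustration, not part of the package API.
    """
    maximum = MAX_SUPPORTED_OPSETS.get(domain)
    if maximum is None:
        raise ValueError(f"Unknown domain {domain!r}.")
    return version <= maximum

print(is_supported_opset("", 14))           # within the main-domain maximum
print(is_supported_opset("ai.onnx.ml", 3))  # beyond the supported ai.onnx.ml opset
```

Such a check lets a conversion fail early with a clear message instead of producing a model that a runtime may reject.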