blog page - 1/3#
Xop, easy to create onnx graph#
2022-02-27
The onnx package has a very verbose API to create ONNX graphs. Could you imagine a user directly writing the syntax tree of a program instead of some Python code? Creating an ONNX graph is very similar to that task, except that the ONNX language is simpler than Python.
…
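To see the verbosity the post refers to, here is a minimal sketch built with onnx.helper for a tiny graph computing (X + A) * X; the names, shapes and operators are picked only for illustration, the post's Xop API itself is not shown here.

```python
from onnx import TensorProto
from onnx.helper import (
    make_graph, make_model, make_node, make_tensor_value_info)

# Every tensor needs an explicit name, element type and shape.
X = make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
A = make_tensor_value_info('A', TensorProto.FLOAT, [None, 2])
Y = make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])

# Every operator becomes one node, wired by input and output names.
node1 = make_node('Add', ['X', 'A'], ['XA'])
node2 = make_node('Mul', ['XA', 'X'], ['Y'])

# Nodes are gathered into a graph, the graph into a model.
graph = make_graph([node1, node2], 'example', [X, A], [Y])
onnx_model = make_model(graph)
print(onnx_model)
```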
A few tricks for tf2onnx#
2021-08-12
A few things I tend to forget, starting with how to run a specific test on a specific opset.
…
Decompose einsum into numpy operators#
2021-08-11
Notebook Einsum decomposition shows what function numpy.einsum does and how it can be decomposed into a series of basic operations, all available in ONNX. That's the purpose of function decompose_einsum_equation. With function export2numpy, it is possible to convert this ONNX graph back into a series of numpy operations.
…
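As a rough illustration of what such a decomposition looks like (plain numpy here, not the function from the post), the equation 'ij,jk->ik' can be rewritten with a broadcasted multiplication followed by a reduce-sum, two operations that exist in ONNX:

```python
import numpy

rng = numpy.random.default_rng(0)
a = rng.standard_normal((3, 4))
b = rng.standard_normal((4, 5))

# Reference result computed by numpy.einsum.
expected = numpy.einsum('ij,jk->ik', a, b)

# Same result with elementary operations only: broadcast both operands
# to shape (i, j, k), multiply, then reduce-sum over the shared axis j.
decomposed = (a[:, :, numpy.newaxis] * b[numpy.newaxis, :, :]).sum(axis=1)

assert numpy.allclose(expected, decomposed)
```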
onnxruntime shape [] != None#
2021-08-10
None is the undefined shape, [] is an empty shape, and when the declared shapes do not fit the results, the outputs can be surprising. The following example shows what onnxruntime produces for the same graph when its input and output shapes are defined as None versus [].
…
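A minimal sketch showing how the two declarations differ once serialized with onnx.helper (the tensor names are arbitrary):

```python
from onnx import TensorProto
from onnx.helper import make_tensor_value_info

# shape=None: the shape field is left unset, the rank is unknown.
undefined = make_tensor_value_info('X', TensorProto.FLOAT, None)
# shape=[]: the shape field exists but holds no dimension, a rank-0 tensor.
empty = make_tensor_value_info('Y', TensorProto.FLOAT, [])

print(undefined.type.tensor_type.HasField('shape'))  # False
print(empty.type.tensor_type.HasField('shape'))      # True
print(len(empty.type.tensor_type.shape.dim))         # 0
```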
ONNX from C##
2021-07-09
This example shows how to compute the predictions of a model using C#.
…
Convert a Lightgbm dump#
Numpy API for ONNX and scikit-learn (part II)#
2021-05-05
This follows the blog post Numpy API for ONNX and scikit-learn (part I), which demonstrated how to insert a custom function in a pipeline and still be able to convert that pipeline into ONNX. This blog post shows how to implement a custom transformer.
…
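For reference, a plain scikit-learn skeleton of such a custom transformer (the log-and-center logic is only an example); converting a pipeline containing it to ONNX is what the post addresses:

```python
import numpy
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


class LogCenter(TransformerMixin, BaseEstimator):
    "Applies log(1 + x) then removes the mean (illustrative only)."

    def fit(self, X, y=None):
        self.mean_ = numpy.log1p(X).mean(axis=0)
        return self

    def transform(self, X):
        return numpy.log1p(X) - self.mean_


X = numpy.random.rand(50, 3)
y = (X.sum(axis=1) > 1.5).astype(numpy.int64)
pipe = make_pipeline(LogCenter(), LogisticRegression()).fit(X, y)
print(pipe.predict(X[:5]))
```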
Numpy API for ONNX and scikit-learn (part I)#
2021-05-05
sklearn-onnx converts most pipelines, including numerical preprocessing and predictors, but it fails whenever custom code is involved. That covers the use of FunctionTransformer or a new model inheriting from BaseEstimator. To be successful, the conversion needs a way to convert the custom code into ONNX. The solution proposed here bypasses that complex step (automatically rewriting a Python function with ONNX operators) by letting the user write the custom code directly with ONNX operators. However, even though most of the operators are close to numpy functions, they are not the same. To avoid spending time mapping one to the other, many numpy functions were implemented with ONNX operators. A custom function or predictor can then be written against this API to build a single ONNX graph executed with a runtime.
…
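A minimal sketch of the kind of pipeline that trips the converter, assuming an arbitrary numpy function wrapped in a FunctionTransformer:

```python
import numpy
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer


def custom_fct(X):
    # Arbitrary numpy code: nothing tells the converter how to turn
    # this Python function into ONNX operators.
    return numpy.log1p(numpy.abs(X))


X, y = load_iris(return_X_y=True)
pipe = make_pipeline(FunctionTransformer(custom_fct),
                     LogisticRegression(max_iter=1000))
pipe.fit(X, y)
print(pipe.predict(X[:5]))
```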
Parallelization of Random Forest predictions#
2020-11-27
I've been struggling to understand why the first implementation of TreeEnsemble could not be as fast as the scikit-learn implementation of a RandomForest when the number of observations was 100,000 or above, with 100 trees and a depth >= 10. The only difference was that the computation was parallelized by trees and not by observations. These observations are benchmarked in Benchmark Random Forests, Tree Ensemble, (AoS and SoA) (Benchmark Random Forests, Tree Ensemble, Multi-Classification for the multiclass version).
…
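A small Python sketch of the two strategies (this is not the TreeEnsemble implementation, just scikit-learn trees, a thread pool and a toy size): both compute the same predictions, the difference is only in how the work is split.

```python
import numpy
from concurrent.futures import ThreadPoolExecutor
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=10000, n_features=10, random_state=0)
rf = RandomForestRegressor(n_estimators=100, max_depth=10, n_jobs=-1)
rf.fit(X, y)

def by_trees(X):
    # One task per tree: every task walks all observations through its
    # tree, the partial predictions are averaged at the end.
    with ThreadPoolExecutor() as ex:
        parts = list(ex.map(lambda tree: tree.predict(X), rf.estimators_))
    return sum(parts) / len(parts)

def by_observations(X, n_chunks=8):
    # One task per chunk of observations: every task walks its chunk
    # through all the trees, results are simply concatenated.
    chunks = numpy.array_split(X, n_chunks)
    with ThreadPoolExecutor() as ex:
        parts = list(ex.map(rf.predict, chunks))
    return numpy.concatenate(parts)

assert numpy.allclose(by_trees(X), by_observations(X))
```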
x / y != x * (1 / y)#
2020-06-09
I was recently investigating issue onnxruntime/4130 in notebook Discrepencies with ONNX. While looking into a way to solve it, I finally discovered that this is not an easy problem.
…
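A quick numpy experiment in float32 shows the two formulas indeed disagree on some inputs (the value ranges are arbitrary):

```python
import numpy

rng = numpy.random.default_rng(0)
x = rng.uniform(0.1, 10, 100000).astype(numpy.float32)
y = rng.uniform(0.1, 10, 100000).astype(numpy.float32)

d1 = x / y          # one rounding
d2 = x * (1 / y)    # 1 / y is rounded first, the product is rounded again

mismatch = d1 != d2
print("different results:", mismatch.sum(), "out of", mismatch.size)
print("max relative difference:", (numpy.abs(d1 - d2) / numpy.abs(d1)).max())
```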