Command lines

  1. Automatically creates an asv benchmark

  2. Computes statistics on an ONNX graph

  3. Converts and compares an ONNX file

  4. Converts asv results into csv

  5. Exports an ONNX graph into Python code creating the same graph

  6. Investigates whether decomposing einsum is faster

  7. Measures model latency

  8. Optimizes an ONNX graph

  9. Plots an ONNX graph as text

  10. Replays a benchmark of stored converted models by validate_runtime

  11. Validates a runtime against scikit-learn

Automatically creates an asv benchmark

The command creates a benchmark based on the asv module. It does not run it.

Example:

python -m mlprodict asv_bench --models LogisticRegression,LinearRegression
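
Each command is also callable from Python. A minimal sketch, assuming
asv_bench is exposed by mlprodict.cli in the same way as convert_validate
below and that its keyword arguments mirror the command line flags:

from mlprodict.cli import asv_bench

# creates the benchmark files in folder 'asvsklonnx' without running them
asv_bench(location="asvsklonnx",
          models="LogisticRegression,LinearRegression",
          verbose=1)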

<<<

python -m mlprodict asv_bench --help

>>>

usage: asv_bench [-h] [-l LOCATION] [-o OPSET_MIN] [-op OPSET_MAX]
                 [-r RUNTIME] [-m MODELS] [-s SKIP_MODELS] [-e EXTENDED_LIST]
                 [-d DIMS] [-n N_FEATURES] [-dt DTYPE] [-v VERBOSE] [-c CLEAN]
                 [-f FLAT] [-co CONF_PARAMS] [-b BUILD] [-a ADD_PYSPY]
                 [--env ENV] [-ma MATRIX]

Creates an `asv` benchmark in a folder but does not run it.

optional arguments:
  -h, --help            show this help message and exit
  -l LOCATION, --location LOCATION
                        location of the benchmark (default: asvsklonnx)
  -o OPSET_MIN, --opset_min OPSET_MIN
                        tries every conversion from this minimum opset, `-1`
                        to get the current opset defined by module onnx
                        (default: -1)
  -op OPSET_MAX, --opset_max OPSET_MAX
                        tries every conversion up to maximum opset, `-1` to
                        get the current opset defined by module onnx (default:
                        )
  -r RUNTIME, --runtime RUNTIME
                        runtime to check, *scikit-learn*, *python*,
                        *python_compiled* compiles the graph structure and is
                        more efficient when the number of observations is
                        small, *onnxruntime1* to check `onnxruntime`,
                        *onnxruntime2* to check every ONNX node independently
                        with onnxruntime, many runtimes can be checked at the
                        same time if the value is a comma separated list
                        (default: scikit-learn,python_compiled)
  -m MODELS, --models MODELS
                        list of models to test or empty string to test them
                        all (default: )
  -s SKIP_MODELS, --skip_models SKIP_MODELS
                        models to skip (default: )
  -e EXTENDED_LIST, --extended_list EXTENDED_LIST
                        extends the list of :epkg:`scikit-learn` converters
                        with converters implemented in this module (default:
                        True)
  -d DIMS, --dims DIMS  number of observations to try (default:
                        1,10,100,1000,10000)
  -n N_FEATURES, --n_features N_FEATURES
                        number of features to try (default: 4,20)
  -dt DTYPE, --dtype DTYPE
                        '32' or '64' or None for both, limits the test to one
                        specific number type (default: )
  -v VERBOSE, --verbose VERBOSE
                        integer from 0 (None) to 2 (full verbose) (default: 1)
  -c CLEAN, --clean CLEAN
                        clean the folder first, otherwise overwrites the
                        content (default: True)
  -f FLAT, --flat FLAT  one folder for all files or subfolders (default:
                        False)
  -co CONF_PARAMS, --conf_params CONF_PARAMS
                        to overwrite some of the configuration parameters,
                        format ``name,value;name2,value2`` (default: )
  -b BUILD, --build BUILD
                        location of the outputs (env, html, results) (default:
                        )
  -a ADD_PYSPY, --add_pyspy ADD_PYSPY
                        add an extra folder with code to profile each
                        configuration (default: False)
  --env ENV             default environment or ``same`` to use the current one
                        (default: )
  -ma MATRIX, --matrix MATRIX
                        specifies versions for a module as a json string,
                        example: ``{'onnxruntime': ['1.1.1', '1.1.2']}``, if a
                        package name starts with `'~'`, the package is removed
                        (default: )

(original entry : asv_bench.py:docstring of mlprodict.cli.asv_bench.asv_bench, line 40)

Computes statistics on an ONNX graph

The command computes statistics on an ONNX model.
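
Example, assuming model.onnx is an existing file; the flags come from the
help message below:

python -m mlprodict onnx_stats --name model.onnx --kind node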

<<<

python -m mlprodict onnx_stats --help

>>>

usage: onnx_stats [-h] [-n NAME] [-o OPTIM] [-k KIND]

Computes statistics on an ONNX model.

optional arguments:
  -h, --help            show this help message and exit
  -n NAME, --name NAME  filename (default: None)
  -o OPTIM, --optim OPTIM
                        computes statistics before and after the optimisation
                        is done (default: False)
  -k KIND, --kind KIND  kind of statistics, if left unknown, prints out the
                        metadata, possible values: * `io`: prints input and
                        output name, type, shapes * `node`: prints the
                        distribution of node types * `text`: prints a text
                        summary (default: )

(original entry : optimize.py:docstring of mlprodict.cli.optimize.onnx_stats, line 11)

Converts and compares an ONNX file

The command converts and validates a scikit-learn model. The following example checks the predictions of a logistic regression.

import pickle
import pandas
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# train a simple model on Iris
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=11)
clr = LogisticRegression()
clr.fit(X_train, y_train)

# store the test data and the pickled model for convert_validate
pandas.DataFrame(X_test).to_csv("data.csv", index=False)
with open("model.pkl", "wb") as f:
    pickle.dump(clr, f)

And the command line to check the predictions:

python -m mlprodict convert_validate --pkl model.pkl --data data.csv
       --method predict,predict_proba
       --name output_label,output_probability
       --verbose 1
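
The same check can be run from Python with the function convert_validate;
a minimal sketch, assuming its keyword arguments mirror the command line
flags:

from mlprodict.cli import convert_validate

convert_validate(pkl="model.pkl", data="data.csv",
                 method="predict,predict_proba",
                 name="output_label,output_probability",
                 verbose=1)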

<<<

python -m mlprodict convert_validate --help

>>>

usage: convert_validate [-h] [--pkl PKL] [-d DATA] [-s SCHEMA] [-m METHOD]
                        [-n NAME] [-t TARGET_OPSET] [-o OUTONNX] [-r RUNTIME]
                        [-me METRIC] [-u USE_DOUBLE] [-no NOSHAPE] [-op OPTIM]
                        [-re REWRITE_OPS] [-opt OPTIONS] [-v VERBOSE]
                        [-reg REGISTER]

Converts a model stored in a *pkl* file and measures the differences between
the model and the ONNX predictions.

optional arguments:
  -h, --help            show this help message and exit
  --pkl PKL             pickle file (default: None)
  -d DATA, --data DATA  data file, loaded with pandas, converted to a single
                        array, the data is used to guess the schema if
                        *schema* not specified (default: )
  -s SCHEMA, --schema SCHEMA
                        initial type of the model (default: )
  -m METHOD, --method METHOD
                        method to call (default: predict)
  -n NAME, --name NAME  output name (default: Y)
  -t TARGET_OPSET, --target_opset TARGET_OPSET
                        target opset (default: )
  -o OUTONNX, --outonnx OUTONNX
                        produced ONNX model (default: model.onnx)
  -r RUNTIME, --runtime RUNTIME
                        runtime to use to compute predictions, 'python',
                        'python_compiled', 'onnxruntime1' or 'onnxruntime2'
                        (default: python)
  -me METRIC, --metric METRIC
                        the metric 'l1med' is given by function
                        :func:`measure_relative_difference <mlprodict.onnxrt.v
                        alidate.validate_difference.measure_relative_differenc
                        e>` (default: l1med)
  -u USE_DOUBLE, --use_double USE_DOUBLE
                        use double for the runtime if possible, two possible
                        options, ``"float64"`` or ``'switch'``, the first
                        option produces an ONNX file with doubles, the second
                        option loads an ONNX file (float or double) and
                        replaces matrices in ONNX with the matrices coming
                        from the model, this second way is just for testing
                        purposes (default: )
  -no NOSHAPE, --noshape NOSHAPE
                        run the conversion with no shape information (default:
                        False)
  -op OPTIM, --optim OPTIM
                        applies optimisations on the first ONNX graph, use
                        'onnx' to reduce the number of Identity nodes and
                        redundant subgraphs (default: onnx)
  -re REWRITE_OPS, --rewrite_ops REWRITE_OPS
                        rewrites some converters from :epkg:`sklearn-onnx`
                        (default: True)
  -opt OPTIONS, --options OPTIONS
                        additional options for conversion, dictionary as a
                        string (default: )
  -v VERBOSE, --verbose VERBOSE
                        verbose level (default: 1)
  -reg REGISTER, --register REGISTER
                        registers additional converters implemented by this
                        package (default: True)

(original entry : convert_validate.py:docstring of mlprodict.cli.convert_validate.convert_validate, line 37)

Converts asv results into csv

The command converts asv results into csv.

Example:

python -m mlprodict asv2csv -f <folder> -o result.csv

(original entry : asv2csv.py:docstring of mlprodict.cli.asv2csv.asv2csv, line 11)

Exports an ONNX graph into Python code creating the same graph

The command converts an ONNX graph into Python code that generates the same graph. The code may use the onnx, numpy or tf2onnx syntax.

Example:

python -m mlprodict onnx_code --filename="something.onnx" --format=onnx
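
The command is also callable from Python; a minimal sketch, assuming the
function onnx_code from module mlprodict.cli.onnx_code (see the original
entry below) accepts keyword arguments mirroring the flags:

from mlprodict.cli.onnx_code import onnx_code

# prints on stdout the python code recreating something.onnx
onnx_code(filename="something.onnx", format="onnx")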

<<<

python -m mlprodict onnx_code --help

>>>

usage: onnx_code [-h] [-f FILENAME] [-fo FORMAT] [-o OUTPUT] [-v VERBOSE]
                 [-n NAME] [-op OPSET]

Exports an ONNX graph into a python code creating the same graph.

optional arguments:
  -h, --help            show this help message and exit
  -f FILENAME, --filename FILENAME
                        onnx file (default: None)
  -fo FORMAT, --format FORMAT
                        format to export to (`onnx`, `tf2onnx`, `numpy`)
                        (default: onnx)
  -o OUTPUT, --output OUTPUT
                        output file to produce or None to print it on stdout
                        (default: )
  -v VERBOSE, --verbose VERBOSE
                        verbosity level (default: 0)
  -n NAME, --name NAME  rewrite the graph name (default: )
  -op OPSET, --opset OPSET
                        overwrite the opset (may not work depending on the
                        format) (default: )

(original entry : onnx_code.py:docstring of mlprodict.cli.onnx_code.onnx_code, line 12)

Investigates whether decomposing einsum is faster

The command checks whether decomposing an einsum equation is faster than the generic einsum implementation.

Example:

python -m mlprodict einsum_test --equation="abc,cd->abd" --output=res.csv
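
To illustrate what the decomposition is about, the equation used above,
abc,cd->abd, reduces to a matrix multiplication broadcast over the first
axis; a small numpy check:

import numpy as np

a = np.random.rand(2, 3, 4)
b = np.random.rand(4, 5)

full = np.einsum("abc,cd->abd", a, b)
# decomposed version: a matmul broadcast over the first axis
dec = a @ b

assert np.allclose(full, dec)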

<<<

python -m mlprodict einsum_test --help

>>>

usage: einsum_test [-h] [-e EQUATION] [-s SHAPE] [-p PERM] [-r RUNTIME]
                   [-v VERBOSE] [-o OUTPUT] [-n NUMBER] [-re REPEAT]

Investigates whether or not decomposing einsum is faster.

optional arguments:
  -h, --help            show this help message and exit
  -e EQUATION, --equation EQUATION
                        einsum equation to test (default: abc,cd->abd)
  -s SHAPE, --shape SHAPE
                        an integer (all dimensions get the same size), or a
                        list of shapes in a string separated with `;`, or a
                        list of integers to try out multiple shapes, example:
                        `5`, `(5,5,5),(5,5)`, `5,6` (default: 30)
  -p PERM, --perm PERM  check on permutation or all letter permutations
                        (default: False)
  -r RUNTIME, --runtime RUNTIME
                        `'numpy'`, `'python'`, `'onnxruntime'` (default:
                        python)
  -v VERBOSE, --verbose VERBOSE
                        verbose (default: 1)
  -o OUTPUT, --output OUTPUT
                        output file (usually a csv file or an excel file), it
                        requires pandas (default: )
  -n NUMBER, --number NUMBER
                        usual parameter to measure a function (default: 5)
  -re REPEAT, --repeat REPEAT
                        usual parameter to measure a function (default: 5)

(original entry : einsum.py:docstring of mlprodict.cli.einsum.einsum_test, line 17)

Measures model latency

The command generates random inputs and calls the model many times on these inputs. It returns the processing time for one iteration.

Example:

python -m mlprodict latency --model "model.onnx"
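
The same measure is available from Python; a minimal sketch, assuming the
function latency from module mlprodict.cli.validate (see the original
entry below) takes keyword arguments mirroring the flags:

from mlprodict.cli.validate import latency

res = latency(model="model.onnx", runtime="onnxruntime",
              number=10, repeat=10)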

<<<

python -m mlprodict latency --help

>>>

usage: latency [-h] [-m MODEL] [--law LAW] [-s SIZE] [-n NUMBER] [-r REPEAT]
               [-ma MAX_TIME] [-ru RUNTIME] [-d DEVICE] [--fmt FMT]
               [-p PROFILING] [-pr PROFILE_OUTPUT]

Measures the latency of a model (python API).

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        ONNX graph (default: None)
  --law LAW             random law used to generate fake inputs (default:
                        normal)
  -s SIZE, --size SIZE  batch size, it replaces the first dimension of every
                        input if it is left unknown (default: 1)
  -n NUMBER, --number NUMBER
                        number of calls to measure (default: 10)
  -r REPEAT, --repeat REPEAT
                        number of times to repeat the experiment (default: 10)
  -ma MAX_TIME, --max_time MAX_TIME
                        if it is > 0, the experiment is repeated as many
                        times as possible during that period of time
                        (default: 0)
  -ru RUNTIME, --runtime RUNTIME
                        available runtime (default: onnxruntime)
  -d DEVICE, --device DEVICE
                        device, `cpu`, `cuda:0` or a list of providers
                        `CPUExecutionProvider,CUDAExecutionProvider`
                        (default: cpu)
  --fmt FMT             None or `csv`, it then returns a string formatted like
                        a csv file (default: )
  -p PROFILING, --profiling PROFILING
                        if True, profiles the execution of every node; the
                        profile can be sorted by name or type, the value for
                        this parameter should be in `(None, 'name', 'type')`
                        (default: )
  -pr PROFILE_OUTPUT, --profile_output PROFILE_OUTPUT
                        output name for the profiling if profiling is
                        specified (default: profiling.csv)

(original entry : validate.py:docstring of mlprodict.cli.validate.latency, line 22)

Optimizes an ONNX graph

The command optimizes an ONNX model.
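
Example, assuming model.onnx is an existing file; the flags come from the
help message below:

python -m mlprodict onnx_optim --name model.onnx --outfile model.opt.onnx --verbose 1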

<<<

python -m mlprodict onnx_optim --help

>>>

usage: onnx_optim [-h] [-n NAME] [-o OUTFILE] [-r RECURSIVE] [-op OPTIONS]
                  [-v VERBOSE]

Optimizes an ONNX model.

optional arguments:
  -h, --help            show this help message and exit
  -n NAME, --name NAME  filename (default: None)
  -o OUTFILE, --outfile OUTFILE
                        output filename (default: )
  -r RECURSIVE, --recursive RECURSIVE
                        processes the main graph and the subgraphs (default:
                        True)
  -op OPTIONS, --options OPTIONS
                        options, kind of optimisation to do (default: )
  -v VERBOSE, --verbose VERBOSE
                        display statistics before and after the optimisation
                        (default: 0)

(original entry : optimize.py:docstring of mlprodict.cli.optimize.onnx_optim, line 10)

Plots an ONNX graph as text

The command shows an ONNX graph as text on the standard output.

Example:

python -m mlprodict plot_onnx --filename="something.onnx" --format=simple
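
The command is also callable from Python; a minimal sketch, assuming the
function plot_onnx from module mlprodict.cli.onnx_code (see the original
entry below) accepts keyword arguments mirroring the flags:

from mlprodict.cli.onnx_code import plot_onnx

# prints a text representation of the graph on stdout
plot_onnx(filename="something.onnx", format="simple")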

<<<

python -m mlprodict plot_onnx --help

>>>

usage: plot_onnx [-h] [-f FILENAME] [-fo FORMAT] [-v VERBOSE] [-o OUTPUT]

Plots an ONNX graph on the standard output.

optional arguments:
  -h, --help            show this help message and exit
  -f FILENAME, --filename FILENAME
                        onnx file (default: None)
  -fo FORMAT, --format FORMAT
                        format to export to (`simple`, `tree`, `dot`, `io`,
                        `mat`, `raw`) (default: onnx)
  -v VERBOSE, --verbose VERBOSE
                        verbosity level (default: 0)
  -o OUTPUT, --output OUTPUT
                        output file to produce or None to print it on stdout
                        (default: )

(original entry : onnx_code.py:docstring of mlprodict.cli.onnx_code.plot_onnx, line 10)

Replays a benchmark of stored converted models by validate_runtime

The command reruns a benchmark when models were stored by the command line validate_runtime.

Example:

python -m mlprodict benchmark_replay --folder dumped --out bench_results.xlsx

Parameter --time_kwargs may be used to reduce or increase the benchmark precision. The following value tells the function to run benchmarks with datasets of 1 or 10 rows, each measure running *number* predictions in a row and being repeated *repeat* times. The total time is divided by number * repeat. Parameter --time_kwargs_fact may be used to increase these numbers for some specific models; 'lin' multiplies *number* by 10 when the model is linear.

-t "{\"1\":{\"number\":10,\"repeat\":10},\"10\":{\"number\":5,\"repeat\":5}}"

<<<

python -m mlprodict benchmark_replay --help

>>>

usage: benchmark_replay [-h] [-f FOLDER] [-r RUNTIME] [-t TIME_KWARGS]
                        [-s SKIP_LONG_TEST] [-ti TIME_KWARGS_FACT]
                        [-tim TIME_LIMIT] [--out OUT] [-v VERBOSE]

The command reruns a benchmark if models were stored by command line
`validate_runtime`.

optional arguments:
  -h, --help            show this help message and exit
  -f FOLDER, --folder FOLDER
                        where to find pickled files (default: None)
  -r RUNTIME, --runtime RUNTIME
                        runtimes, comma separated list (default: python)
  -t TIME_KWARGS, --time_kwargs TIME_KWARGS
                        a dictionary which defines the number of rows and the
                        parameter *number* and *repeat* when benchmarking a
                        model, the value must follow `json` format (default: )
  -s SKIP_LONG_TEST, --skip_long_test SKIP_LONG_TEST
                        skips tests for high values of N if they seem too long
                        (default: True)
  -ti TIME_KWARGS_FACT, --time_kwargs_fact TIME_KWARGS_FACT
                        to multiply number and repeat in *time_kwargs*
                        depending on the model (see
                        :func:`_multiply_time_kwargs <mlprodict.onnxrt.validat
                        e.validate_helper._multiply_time_kwargs>`) (default: )
  -tim TIME_LIMIT, --time_limit TIME_LIMIT
                        to stop benchmarking after this limit of time
                        (default: 4)
  --out OUT             output raw results into this file (excel format)
                        (default: )
  -v VERBOSE, --verbose VERBOSE
                        integer from 0 (None) to 2 (full verbose) (default: 1)

(original entry : replay.py:docstring of mlprodict.cli.replay.benchmark_replay, line 19)

Validates a runtime against scikit-learn

The command walks through all scikit-learn operators, tries to convert them, checks the predictions, and produces a report.

Example:

python -m mlprodict validate_runtime --models LogisticRegression,LinearRegression

The following example benchmarks the models sklearn.ensemble.RandomForestRegressor and sklearn.tree.DecisionTreeRegressor; it compares onnxruntime against scikit-learn for opset 10.

python -m mlprodict validate_runtime -v 1 -o 10 -op 10 -c 1 -r onnxruntime1
       -m RandomForestRegressor,DecisionTreeRegressor -out bench_onnxruntime.xlsx -b 1

Parameter --time_kwargs may be used to reduce or increase the benchmark precision. The following value tells the function to run benchmarks with datasets of 1 or 10 rows, each measure running *number* predictions in a row and being repeated *repeat* times. The total time is divided by number * repeat. Parameter --time_kwargs_fact may be used to increase these numbers for some specific models; 'lin' multiplies *number* by 10 when the model is linear.

-t "{\"1\":{\"number\":10,\"repeat\":10},\"10\":{\"number\":5,\"repeat\":5}}"

The following example dumps every model in the list:

python -m mlprodict validate_runtime --out_raw raw.csv --out_summary sum.csv
       --models LinearRegression,LogisticRegression,DecisionTreeRegressor,DecisionTreeClassifier
       -r python,onnxruntime1 -o 10 -op 10 -v 1 -b 1 -dum 1
       -du model_dump -n 20,100,500 --out_graph benchmark.png --dtype 32

The command line also generates a graph produced by the function plot_validate_benchmark.
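
The validation is also callable from Python; a minimal sketch, assuming
the function validate_runtime from module mlprodict.cli.validate (see the
original entry below) takes keyword arguments mirroring the flags:

from mlprodict.cli.validate import validate_runtime

validate_runtime(models="LogisticRegression,LinearRegression",
                 runtime="python", verbose=1, benchmark=False)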

<<<

python -m mlprodict validate_runtime --help

>>>

usage: validate_runtime [-h] [-v VERBOSE] [-o OPSET_MIN] [-op OPSET_MAX]
                        [-c CHECK_RUNTIME] [-r RUNTIME] [-d DEBUG] [-m MODELS]
                        [-ou OUT_RAW] [-out OUT_SUMMARY] [-du DUMP_FOLDER]
                        [-dum DUMP_ALL] [-b BENCHMARK] [-ca CATCH_WARNINGS]
                        [-a ASSUME_FINITE] [-ve VERSIONS] [-s SKIP_MODELS]
                        [-e EXTENDED_LIST] [-se SEPARATE_PROCESS]
                        [-t TIME_KWARGS] [-n N_FEATURES]
                        [--out_graph OUT_GRAPH] [-f FORCE_RETURN] [-dt DTYPE]
                        [-sk SKIP_LONG_TEST] [-nu NUMBER] [-re REPEAT]
                        [-ti TIME_KWARGS_FACT] [-tim TIME_LIMIT] [-n_ N_JOBS]

Walks through most of :epkg:`scikit-learn` operators, models, predictors or
transformers, tries to convert them into `ONNX` and computes the predictions
with a specific runtime.

optional arguments:
  -h, --help            show this help message and exit
  -v VERBOSE, --verbose VERBOSE
                        integer from 0 (None) to 2 (full verbose) (default: 1)
  -o OPSET_MIN, --opset_min OPSET_MIN
                        tries every conversion from this minimum opset, -1 to
                        get the current opset (default: -1)
  -op OPSET_MAX, --opset_max OPSET_MAX
                        tries every conversion up to maximum opset, -1 to get
                        the current opset (default: )
  -c CHECK_RUNTIME, --check_runtime CHECK_RUNTIME
                        to check the runtime and not only the conversion
                        (default: True)
  -r RUNTIME, --runtime RUNTIME
                        runtime to check, python, onnxruntime1 to check
                        `onnxruntime`, onnxruntime2 to check every *ONNX* node
                        independently with onnxruntime, many runtimes can be
                        checked at the same time if the value is a comma
                        separated list (default: python)
  -d DEBUG, --debug DEBUG
                        stops whenever an exception is raised, only if
                        *separate_process* is False (default: False)
  -m MODELS, --models MODELS
                        comma separated list of models to test or empty string
                        to test them all (default: )
  -ou OUT_RAW, --out_raw OUT_RAW
                        output raw results into this file (excel format)
                        (default: model_onnx_raw.xlsx)
  -out OUT_SUMMARY, --out_summary OUT_SUMMARY
                        output an aggregated view into this file (excel
                        format) (default: model_onnx_summary.xlsx)
  -du DUMP_FOLDER, --dump_folder DUMP_FOLDER
                        folder where to dump information (pickle) in case of
                        mismatch (default: )
  -dum DUMP_ALL, --dump_all DUMP_ALL
                        dumps all models, not only the failing ones (default:
                        False)
  -b BENCHMARK, --benchmark BENCHMARK
                        run benchmark (default: False)
  -ca CATCH_WARNINGS, --catch_warnings CATCH_WARNINGS
                        catch warnings (default: True)
  -a ASSUME_FINITE, --assume_finite ASSUME_FINITE
                        See `config_context <https://scikit-learn.org/stable/m
                        odules/generated/sklearn.config_context.html>`_, If
                        True, validation for finiteness will be skipped,
                        saving time, but leading to potential crashes. If
                        False, validation for finiteness will be performed,
                        avoiding error. (default: True)
  -ve VERSIONS, --versions VERSIONS
                        add columns with versions of used packages, `numpy`,
                        :epkg:`scikit-learn`, `onnx`, `onnxruntime`,
                        :epkg:`sklearn-onnx` (default: False)
  -s SKIP_MODELS, --skip_models SKIP_MODELS
                        models to skip (default: )
  -e EXTENDED_LIST, --extended_list EXTENDED_LIST
                        extends the list of :epkg:`scikit-learn` converters
                        with converters implemented in this module (default:
                        True)
  -se SEPARATE_PROCESS, --separate_process SEPARATE_PROCESS
                        run every model in a separate process, this option
                        must be used to run all models in a row even if one
                        of them crashes (default: False)
  -t TIME_KWARGS, --time_kwargs TIME_KWARGS
                        a dictionary which defines the number of rows and the
                        parameter *number* and *repeat* when benchmarking a
                        model, the value must follow `json` format (default: )
  -n N_FEATURES, --n_features N_FEATURES
                        change the default number of features for a specific
                        problem, it can also be a comma separated list
                        (default: )
  --out_graph OUT_GRAPH
                        image name, to output a graph which summarizes a
                        benchmark in case it was run (default: )
  -f FORCE_RETURN, --force_return FORCE_RETURN
                        forces the function to return the results, used when
                        the results are produced through a separate process
                        (default: False)
  -dt DTYPE, --dtype DTYPE
                        '32' or '64' or None for both, limits the test to one
                        specific number type (default: )
  -sk SKIP_LONG_TEST, --skip_long_test SKIP_LONG_TEST
                        skips tests for high values of N if they seem too long
                        (default: False)
  -nu NUMBER, --number NUMBER
                        to multiply number values in *time_kwargs* (default:
                        1)
  -re REPEAT, --repeat REPEAT
                        to multiply repeat values in *time_kwargs* (default:
                        1)
  -ti TIME_KWARGS_FACT, --time_kwargs_fact TIME_KWARGS_FACT
                        to multiply number and repeat in *time_kwargs*
                        depending on the model (see
                        :func:`_multiply_time_kwargs <mlprodict.onnxrt.validat
                        e.validate_helper._multiply_time_kwargs>`) (default:
                        lin)
  -tim TIME_LIMIT, --time_limit TIME_LIMIT
                        to stop benchmarking after this limit of time
                        (default: 4)
  -n_ N_JOBS, --n_jobs N_JOBS
                        force the number of jobs to have this value, by
                        default, it is equal to the number of CPUs (default: 0)

(original entry : validate.py:docstring of mlprodict.cli.validate.validate_runtime, line 66)