Benchmarking with asv
asv is a popular framework for measuring the execution time and peak memory of a script. The following functions automate the creation of such a benchmark to compare onnxruntime with scikit-learn for predictions.
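For context, a minimal asv benchmark is a plain Python class whose `time_*` and `peakmem_*` methods asv discovers and measures. The class below is an illustrative sketch of what such a benchmark looks like, not one generated by mlprodict:

```python
import numpy
from sklearn.linear_model import LogisticRegression

class TimePredict:
    # asv calls setup() before measuring each method.
    def setup(self):
        X = numpy.random.rand(100, 4)
        y = (X.sum(axis=1) > 2).astype(numpy.int64)
        self.model = LogisticRegression().fit(X, y)
        self.X = X

    # Methods prefixed with 'time_' measure execution time.
    def time_predict(self):
        self.model.predict(self.X)

    # Methods prefixed with 'peakmem_' measure peak memory.
    def peakmem_predict(self):
        self.model.predict(self.X)
```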
mlprodict.asv_benchmark.export_asv_json(folder, as_df=False, last_one=False, baseline=None, conf=None)

Exports the results of an asv benchmark stored in folder, as JSON or, when as_df is True, as a dataframe.
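A possible call, assuming a benchmark was already executed and its results live in a folder named `asv_bench` (the folder name is hypothetical, and the dataframe output is inferred from the `as_df` parameter name):

```python
from mlprodict.asv_benchmark import export_asv_json

# 'asv_bench' is a hypothetical folder holding asv results;
# as_df=True is assumed to return the results as a dataframe.
df = export_asv_json("asv_bench", as_df=True)
print(df.head())
```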
mlprodict.asv_benchmark.create_asv_benchmark(location, opset_min=-1, opset_max=None, runtime=('scikit-learn', 'python_compiled'), models=None, skip_models=None, extended_list=True, dims=(1, 10, 100, 10000), n_features=(4, 20), dtype=None, verbose=0, fLOG=print, clean=True, conf_params=None, filter_exp=None, filter_scenario=None, flat=False, exc=False, build=None, execute=False, add_pyspy=False, env=None, matrix=None)
Creates an asv benchmark in a folder but does not run it.
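A sketch of how the function might be called, restricted to a single model to keep the generated benchmark small; the model name and the exact form of the `models` argument are assumptions, only the parameter names come from the signature above:

```python
from mlprodict.asv_benchmark import create_asv_benchmark

# Generate the benchmark files; nothing is executed yet.
create_asv_benchmark(
    location="asv_bench",           # output folder (any path works)
    models={"LogisticRegression"},  # assumed: restrict to one estimator
    runtime=("scikit-learn", "python_compiled"),
    dims=(1, 100),                  # batch sizes to benchmark
    verbose=1,
)
```

Once generated, the benchmark is run with asv's own command line, e.g. `asv run` inside the created folder; the `execute` parameter suggests create_asv_benchmark can also trigger a first run itself, though that is an inference from the parameter name.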