onnx.shape_inference
- onnx.shape_inference.infer_shapes(model: Union[onnx.onnx_ml_pb2.ModelProto, bytes], check_type: bool = False, strict_mode: bool = False, data_prop: bool = False) → onnx.onnx_ml_pb2.ModelProto
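  Apply shape inference to the provided ModelProto; inferred tensor shapes are added to the graph's value_info. A minimal usage sketch (the paths model.onnx and model_inferred.onnx are placeholders):

  ```python
  import onnx
  from onnx import shape_inference

  model = onnx.load("model.onnx")

  # Run shape inference; check_type also validates types, and
  # strict_mode raises on inconsistencies instead of ignoring them.
  inferred = shape_inference.infer_shapes(model, check_type=True, strict_mode=True)

  # Inferred shapes for intermediate tensors appear in value_info.
  for vi in inferred.graph.value_info:
      print(vi.name, vi.type.tensor_type.shape)

  onnx.save(inferred, "model_inferred.onnx")
  ```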
- onnx.shape_inference.infer_shapes_path(model_path: str, output_path: str = '', check_type: bool = False, strict_mode: bool = False, data_prop: bool = False) → None
Same as infer_shapes, but takes a model path instead of a loaded model, which allows it to support models larger than 2GB. The inferred model is written directly to output_path; by default, output_path is the original model path, so the input file is overwritten.
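  A short sketch for large models that may not fit through the in-memory API (the paths large_model.onnx and large_model_inferred.onnx are placeholders):

  ```python
  from onnx import shape_inference

  # Shape inference runs on the file directly, so the model is never
  # fully materialized as a Python ModelProto in this process.
  shape_inference.infer_shapes_path("large_model.onnx", "large_model_inferred.onnx")
  ```

  Passing an explicit output_path, as above, avoids overwriting the original model file.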