This does not appear to be caused by numpy; it was related to exporting the model with a torch.int input and then feeding it a np.int64 at runtime. Exporting with torch.long fixed it on the only machine where I've seen this bug occur, and it still needs to be tested on Windows with the DML provider.
Hi @FrancescoSaverioZuppichini, I have the same problem as you: inference produces correct results but takes a long time, and I also get the warning that INT64 needs to be converted to INT32. How did you solve the slow inference in the end?
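Since some execution providers (DML among them) only support 32-bit integer tensors for certain ops, one workaround for the INT64-to-INT32 warning is to downcast integer inputs yourself before running the session, checking that no value overflows. A minimal NumPy sketch (the helper name `safe_int32` is hypothetical, not part of any library):

```python
import numpy as np

def safe_int32(arr):
    """Downcast an int64 array to int32, refusing to overflow silently."""
    info = np.iinfo(np.int32)
    if arr.min() < info.min or arr.max() > info.max:
        raise OverflowError("values do not fit in int32")
    return arr.astype(np.int32)

# e.g. token ids destined for a session whose input was exported as int32
ids = np.array([101, 2023, 102], dtype=np.int64)
ids32 = safe_int32(ids)
```

The explicit range check matters because `astype(np.int32)` alone would wrap out-of-range values without any error.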
            to=TensorProto.INT64,
        )
        new_nodes += [new_scale_node, new_add_node]
    else:
        new_nodes += [node]
    return new_nodes

if __name__ == '__main__':
    model = onnx.load('resize_conv_add.onnx')
    graph = model.graph
    nodes = graph.node
    opset_version = model.opse...
V : tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128), seq(tensor(uint8)), seq(tensor(u...
T2 : tensor(float16), tensor(float), tensor(double), tensor(bfloat16), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bool) Constrain output types to all numeric tensors and bool tensors. ...
ONNX_DATA_TYPE_UINT8
 3: ONNX_DATA_TYPE_INT8
 4: ONNX_DATA_TYPE_UINT16
 5: ONNX_DATA_TYPE_INT16
 6: ONNX_DATA_TYPE_INT32
 7: ONNX_DATA_TYPE_INT64
 8: ONNX_DATA_TYPE_STRING
 9: ONNX_DATA_TYPE_BOOL
10: ONNX_DATA_TYPE_FLOAT16
11: ONNX_DATA_TYPE_DOUBLE
12: ONNX_DATA_TYPE_UINT32...
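The numeric values above follow ONNX's `TensorProto.DataType` enum. A small lookup table (hand-copied here for illustration, so verify against your onnx version) makes runtime logs like "element type 7" readable without consulting the spec:

```python
# Hand-copied subset of the ONNX TensorProto.DataType enum values.
ONNX_TYPE_NAMES = {
    1: "FLOAT", 2: "UINT8", 3: "INT8", 4: "UINT16", 5: "INT16",
    6: "INT32", 7: "INT64", 8: "STRING", 9: "BOOL", 10: "FLOAT16",
    11: "DOUBLE", 12: "UINT32", 13: "UINT64",
}

print(ONNX_TYPE_NAMES[7])  # the element type reported for int64 tensors
```

With the onnx package installed, `onnx.TensorProto.DataType.Name(7)` gives the same answer directly from the generated protobuf enum.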
from skl2onnx.common.data_types import FloatTensorType, Int64TensorType, DoubleTensorType

def convert_dataframe_schema(df, drop=None, batch_axis=False):
    inputs = []
    nrows = None if batch_axis else 1
    for k, v in zip(df.columns, df.dtypes):
        if drop is not None and k in drop:
            continue
        if v == 'int64':
            t = Int64TensorType...
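The dtype dispatch in that function is truncated, but the usual pattern can be sketched without pandas or skl2onnx, using plain strings as stand-ins for the tensor-type classes (the `sketch_schema` helper and the float64/fallback branches are assumptions, not the original code):

```python
# Hypothetical stand-in for convert_dataframe_schema above: maps column dtype
# names to ONNX tensor-type names, skipping dropped columns.
def sketch_schema(columns, dtypes, drop=None):
    inputs = []
    for name, dtype in zip(columns, dtypes):
        if drop is not None and name in drop:
            continue  # dropped columns are excluded from the ONNX input schema
        if dtype == 'int64':
            t = 'Int64TensorType'    # pandas int64 -> ONNX int64 tensor
        elif dtype == 'float64':
            t = 'DoubleTensorType'   # pandas float64 -> ONNX double tensor
        else:
            t = 'FloatTensorType'    # everything else falls back to float32
        inputs.append((name, t))
    return inputs
```

In the real skl2onnx version each tuple would carry an instance such as `Int64TensorType([nrows, 1])` rather than a string.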
{
  // The version of the IR this model targets. See Version enum above.
  // This field MUST be present.
  optional int64 ir_version = 1;

  // The OperatorSets this model relies on.
  // All ModelProtos MUST have at least one entry that
  // specifies which version of the ONNX OperatorSet is
  // being...
The Boxes output has type float, Labels has type int64, and scores has type float. The code I use to fetch the output with the ONNX Runtime C++ API is as follows:

const int* labels_prob = ort_outputs[1].GetTensorMutableData<int>();  // labels
cv::Mat det_labels(boxes_shape[0], 1, CV_32S, (int*)labels_prob);
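Reading an int64 tensor through a 32-bit `int` pointer, as the snippet above does, does not convert the values; it reinterprets the raw bytes. A NumPy sketch of what goes wrong on a little-endian machine:

```python
import numpy as np

labels = np.array([1, 2, 3], dtype=np.int64)  # what the model actually outputs
as_int32 = labels.view(np.int32)              # same bytes, read as 32-bit ints

# On little-endian hardware each int64 splits into (low word, high word),
# so the labels come out interleaved with zeros instead of [1, 2, 3].
print(as_int32.tolist())
```

The fix is to fetch the buffer with the matching element type, `GetTensorMutableData<int64_t>()`, and only then convert to 32-bit values if OpenCV needs a `CV_32S` matrix.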
onnx_model = onnx.load('path/to/the/model.onnx')  # load the ONNX model

2. Loading an ONNX Model with External Data

[Default loading method] If the external data files and the model file are in the same directory, calling onnx.load() as shown in the previous subsection is enough to load the model.