INFO: onnx_op_type: Resize onnx_op_name: /backbone/backbone.0/Resize
INFO: input_name.1: /backbone/backbone.0/Constant_output_0 shape: [1, 1, 480, 480] dtype: <class 'numpy.float32'>
INFO: input_name.2: shape: None dtype: None
INFO: input_name.3: shape: None dtype: None
...
Merge branch 'main' of https://github.com/onnx/onnx into map (321dc49)

onnx/reference/custom_element_types.py:

float8e5m2fnuz = np.dtype((np.uint8, {"e5m2fnuz": (np.uint8, 0)}))
uint4...
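These custom element types are plain numpy structured dtypes over a uint8 storage type: the array carries the raw packed bits, and the named field tags it with the non-native format. A minimal sketch of how such a dtype behaves (the byte values below are illustrative, not from the ONNX codebase):

import numpy as np

# Structured dtype mirroring onnx/reference/custom_element_types.py:
# one uint8 of storage, tagged with the field name "e5m2fnuz".
float8e5m2fnuz = np.dtype((np.uint8, {"e5m2fnuz": (np.uint8, 0)}))

# Illustrative raw bytes; each uint8 holds one packed float8 bit pattern.
raw_bits = np.array([0x38, 0x40, 0x44], dtype=np.uint8)

# Viewing the buffer attaches the custom tag without copying.
tagged = raw_bits.view(float8e5m2fnuz)

print(tagged.dtype)            # structured dtype, storage is uint8
print(tagged.view(np.uint8))   # underlying bit patterns are unchanged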
    elem_type=onnx.TensorProto.FLOAT, shape=shape)
model.graph.output.append(out_node)
onnx.save_model(model, model_path)
onnx_rt_sess = rt.InferenceSession(model_path)
end_node_names = end_node_names if end_node_names else [onnx_rt_sess.get_outputs()[0].name]
feed_dict = {onnx_...
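The fragment above follows a common pattern for inspecting intermediate tensors: append a value_info entry to graph.output, save the model, and run it through onnxruntime. A self-contained sketch of that pattern, with the model paths, tensor name, and shape as assumed placeholders:

import numpy as np
import onnx
import onnx.helper
import onnxruntime as rt

model = onnx.load("model.onnx")            # assumed input path
model_path = "model_with_extra_output.onnx"

# Expose an intermediate tensor as a graph output so onnxruntime returns it.
out_node = onnx.helper.make_tensor_value_info(
    "intermediate_tensor",                 # assumed internal tensor name
    elem_type=onnx.TensorProto.FLOAT,
    shape=[1, 3, 224, 224],                # assumed shape
)
model.graph.output.append(out_node)
onnx.save_model(model, model_path)

# Run the augmented model and fetch the appended output by name.
sess = rt.InferenceSession(model_path)
input_name = sess.get_inputs()[0].name
feed_dict = {input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)}
outputs = sess.run(["intermediate_tensor"], feed_dict)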
when "scales" of upsample is const, onnx will do some check at shape_inference time. But the checker only look at the float_data field while google's protobuf actually supports putting data at raw_data field.
Description
I have a PyTorch model exported to ONNX format. I am trying to run that model with onnxruntime, but I get the error mentioned in the issue.

System information
OS Platform and Distribution: Linux Ubuntu 16.0...
    134     prog = frontend_converter(model, **kwargs)
    135     common_pass(prog)
    136
~/opt/anaconda3/envs/torch/lib/python3.8/site-packages/coremltools/converters/mil/converter.py in __call__(self, *args, **kwargs)
     82         from .frontend.torch import load
     83
---> 84         return load(*args, **kwargs...
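A traceback like this typically originates from calling coremltools' public convert API on a traced PyTorch model, which drives the chain frontend_converter -> mil/converter.py __call__ -> the torch frontend's load. A minimal reproduction sketch under that assumption (the model and shapes are hypothetical):

import coremltools as ct
import torch

# Hypothetical model; tracing first, since the torch frontend expects TorchScript.
model = torch.nn.Linear(4, 2).eval()
example_input = torch.rand(1, 4)
traced = torch.jit.trace(model, example_input)

# This call enters the frames shown in the traceback above.
mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=example_input.shape)])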