quantized_algorithm / quantized_method: how the quantization parameters are computed, and at what granularity the original parameters are split for quantization. quantized_method defaults to channel; with layer, a whole layer's weights share only one set of quantization parameters. Load the original model with the framework-specific API, e.g. load_pytorch. float_dtype: the data type used when generating the non-quantized (float) version of the model. optimization_level: 0, 1, 2 or 3; level 3 takes more conversion time. target_platform: rk3588. custom_string: a label...
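The channel/layer distinction can be shown without any RKNN code: per-layer quantization derives one value range for the whole weight tensor, while per-channel derives one range per output channel. A plain-Python sketch (illustrative only, not RKNN internals):

```python
# Contrast 'layer' vs 'channel' quantized_method: 'layer' shares one
# (min, max) range across all channels; 'channel' keeps one per channel.

def ranges_per_layer(weights):
    """One (min, max) pair shared by every output channel."""
    flat = [w for channel in weights for w in channel]
    return [(min(flat), max(flat))] * len(weights)

def ranges_per_channel(weights):
    """One (min, max) pair per output channel."""
    return [(min(ch), max(ch)) for ch in weights]

# Two channels with very different magnitudes:
w = [[-0.1, 0.05, 0.08], [-4.0, 3.5, 2.0]]
print(ranges_per_layer(w))    # [(-4.0, 3.5), (-4.0, 3.5)]
print(ranges_per_channel(w))  # [(-0.1, 0.08), (-4.0, 3.5)]
```

The small-magnitude channel wastes most of its 8-bit range under per-layer quantization, which is why per-channel is the recommended default.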
Multiple inputs are separated by spaces; this applies only to the quantization stage. At actual inference time, if the model expects BGR input, the images you actually feed it should also be BGR. quantized_dtype: the quantization type; currently only asymmetric_quantized-8 is supported. quantized_algorithm: the algorithm used to compute each layer's quantization parameters; supported values are normal, mmse and kl_divergence, with normal as the default. normal: fast; 20–100 calibration images recommended. mmse: ...
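The arithmetic behind an asymmetric uint8 scheme such as asymmetric_quantized-8 can be sketched in plain Python. This is the textbook formulation (q = round(x / scale) + zero_point, clamped to [0, 255]), not RKNN's exact implementation:

```python
# Textbook asymmetric uint8 quantization: map [x_min, x_max] onto [0, 255]
# with a scale and an integer zero_point so that real 0.0 is exact.

def asym_u8_params(x_min, x_max):
    """Compute (scale, zero_point) for an asymmetric uint8 mapping."""
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)  # range must include 0
    scale = (x_max - x_min) / 255.0
    zero_point = round(-x_min / scale)
    return scale, zero_point

def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(0, min(255, q))          # clamp to the uint8 range

scale, zp = asym_u8_params(-1.28, 1.27)
print(zp)                               # 128
print(quantize(0.0, scale, zp))         # 128: real zero maps exactly to zp
print(quantize(-1.28, scale, zp))       # 0
print(quantize(5.0, scale, zp))         # 255 (clamped)
```

The zero_point is what makes the scheme "asymmetric": the representable range need not be centered on zero, unlike symmetric int8 schemes.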
models:
  # model output name
  name: onnx_detection
  # Original model framework
  platform: pytorch
  # Model input file path
  model_file_path: ./resnet18.pt
  # Describe information such as input and output shapes
  subgraphs:
    # model input tensor shape
    input_size_list:
      - 1,3,512,512
    # input tensor name
    inputs:
      - data
    # ou...
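The `input_size_list` entries above are comma-separated dimensions. A hypothetical helper (`parse_shape` is not part of the toolkit) showing how such an entry maps onto a shape tuple like NCHW:

```python
def parse_shape(entry: str) -> tuple:
    """Parse a shape entry such as '1,3,512,512' into (1, 3, 512, 512)."""
    dims = tuple(int(d) for d in entry.split(","))
    if any(d <= 0 for d in dims):
        raise ValueError(f"non-positive dimension in {entry!r}")
    return dims

# '1,3,512,512' is batch=1, channels=3, height=512, width=512 (NCHW)
print(parse_shape("1,3,512,512"))  # (1, 3, 512, 512)
```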
channel is used by default. # target_platform selects the target chip; currently rk3566, rk3568, rk3588, rv1103 and rv1106 are supported, and the value is case-insensitive.
rknn.config(mean_values=[[0, 0, 0]],
            std_values=[[255, 255, 255]],
            quant_img_RGB2BGR=True,
            quantized_algorithm='normal',
            quantized_method='channel', ...
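The preprocessing that mean_values / std_values describe is a per-channel normalization, out = (pixel - mean) / std, with quant_img_RGB2BGR only affecting how calibration images are read during quantization. A plain-Python sketch of that arithmetic (illustrative, not the toolkit's code):

```python
# Per-channel normalization as configured by mean_values/std_values,
# plus the RGB->BGR channel swap that quant_img_RGB2BGR implies.

def rgb_to_bgr(pixel):
    r, g, b = pixel
    return (b, g, r)

def normalize(pixel, mean=(0, 0, 0), std=(255, 255, 255)):
    """out = (in - mean) / std, applied channel-wise."""
    return tuple((p - m) / s for p, m, s in zip(pixel, mean, std))

px = (255, 128, 0)                # an RGB pixel
print(normalize(rgb_to_bgr(px)))  # (0.0, ~0.502, 1.0)
```

With mean 0 and std 255 this simply rescales 0–255 pixel values into [0, 1]; mean 128 / std 128 would instead map them into roughly [-1, 1].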
quantized_algorithm='normal',
quantized_method='channel',
# optimization_level=2,
compress_weight=False,  # compress the model weights to shrink the rknn model; default is False
single_core_mode=True,
# model_pruning=False,  # prune the model to reduce its size; default is False
quantized_dtype: asymmetric_affine-u8
quantized_algorithm: normal
optimization_level: 3
mean_values: [[0, 0, 0]]
std_values: [[255, 255, 255]]
mmse_epoch: 3
do_sparse_network: True
output_optimize: 0
batch_size: 100
quantize_input_node: False
merge_dequant_layer_and_output_node: Fal...
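The mmse_epoch key above hints at what the mmse algorithm does: instead of taking the scale straight from min/max as normal does, it searches candidate scales and keeps the one with the lowest mean-squared quantization error, which is why it is slower. A hypothetical plain-Python illustration of that idea (not RKNN's actual code):

```python
# MMSE-style scale search: try shrunken versions of the naive min/max
# scale and keep whichever minimizes the mean-squared round-trip error.

def quant_dequant(x, scale):
    """Round-trip x through signed int8 at the given scale."""
    q = max(-128, min(127, round(x / scale)))
    return q * scale

def mse(values, scale):
    return sum((v - quant_dequant(v, scale)) ** 2 for v in values) / len(values)

def mmse_scale(values, base_scale, steps=20):
    """Search candidates below base_scale; base_scale itself is included,
    so the result is never worse than the plain min/max choice."""
    candidates = [base_scale * i / steps for i in range(1, steps)] + [base_scale]
    return min(candidates, key=lambda s: mse(values, s))

vals = [0.01 * i for i in range(-50, 51)]
naive = max(abs(v) for v in vals) / 127
best = mmse_scale(vals, naive)
print(best <= naive and mse(vals, best) <= mse(vals, naive))  # True
```

Shrinking the scale clips outliers but represents the bulk of the values more finely; the search picks whichever trade-off has the lower overall error.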
Exception: E Get outputs failed, error code: RKNN_ERR_FAIL
Model conversion with int8 quantization fails when rknn_batch_size = 2 is set; config is as follows:
rknn.config(
    mean_values=mean,
    std_values=std,
    target_platform=PLATEFORM,
    quantized_algorithm="mmse",
    optimization_level=3,
)...
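For context on what rknn_batch_size = 2 implies: the converted model then consumes inputs whose leading (batch) dimension is 2, so single frames must be fed in pairs. A plain-Python stand-in for the stacking usually done with numpy (illustrative only; it does not diagnose the error above):

```python
# Group single frames into batches matching rknn_batch_size.

def stack_batches(frames, batch_size):
    """Group a flat list of frames into batches of exactly batch_size."""
    if len(frames) % batch_size != 0:
        raise ValueError("frame count must be a multiple of rknn_batch_size")
    return [frames[i:i + batch_size] for i in range(0, len(frames), batch_size)]

frames = ["frame0", "frame1", "frame2", "frame3"]
print(stack_batches(frames, 2))  # [['frame0', 'frame1'], ['frame2', 'frame3']]
```

A mismatch between the batch dimension of the data passed at inference time and the rknn_batch_size baked in at conversion time is one common source of runtime failures.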
rknn.config(mean_values=[[0, 0, 0]],
            std_values=[[128, 128, 128]],
            target_platform='rv1103',
            quantized_algorithm="normal")
# Load the model
rknn.load_onnx(model=model_path)
# Build the model
rknn.build(do_quantization=do_quant, dataset=DATASET_PATH)
# Initialize runtime environment
...
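Each of the API steps above returns a status code, 0 on success. A common guard pattern wraps the check (check_ret is a hypothetical helper, not part of the toolkit):

```python
import sys

def check_ret(ret, stage):
    """Abort with the failing stage's name if an RKNN API call returned non-zero."""
    if ret != 0:
        print(f"{stage} failed, ret = {ret}")
        sys.exit(ret)
    return ret

check_ret(0, "load_onnx")   # ret == 0: passes silently
# check_ret(-1, "build")    # non-zero ret: would print the stage and exit
```

Checking after every step (config, load, build, init_runtime) makes it obvious which stage produced an error code instead of failing later with a less specific message.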
rknn.config(quantized_algorithm='normal',
            quantized_method='channel',
            target_platform='rk3588',
            optimization_level=3)
print('done')

# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model=ONNX_MODEL,
                     inputs=['/emb/Gather_output_0', 'input_state', 'scale_ratio'],
                     input_size_list=[[batch_...
quantized_algorithm='normal',
quantized_method='channel',
quant_img_RGB2BGR=False,
target_platform='rk3588',
float_dtype="float16",
optimization_level=3,
custom_string="this is my rknn model",
remove_weight=False,
compress_weight=False,
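float_dtype="float16" stores the non-quantized model in IEEE 754 half precision, which costs some accuracy relative to float32. The precision loss can be demonstrated with the stdlib alone (struct's 'e' format is IEEE half; this illustrates the number format, not RKNN's conversion code):

```python
import struct

def to_float16(x):
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

print(to_float16(1.0))   # 1.0 (exactly representable in float16)
print(to_float16(0.1))   # 0.0999755859375 (rounded to the 10-bit mantissa)
```

Half precision keeps only ~3 decimal digits of mantissa, which is usually acceptable for weights but is why fully quantized int8 models still go through calibration rather than relying on float16 alone.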