The same warning pattern shows up across several frontends (Torch-TensorRT, the onnxruntime TensorRT execution provider, and standalone TensorRT):

WARNING: [Torch-TensorRT TorchScript Conversion Context] - Subnormal FP16 values detected.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Values less than smallest positive FP16 subnormal value detected. Converting to FP16 minimum subnormalized value.

[07/28/2023-10:51:11] [W] Weights [name=node_of_582 + node_of_583.weight] had the following issues when converted to FP16:
[07/28/2023-10:51:11] [W] - Subnormal FP16 values detected.
[07/28/2023-10:51:11] [W] - Values less than smallest positive FP16 subnormal value detected. Converting to FP16 minimum subnormalized value.
[07/28/2023-10:51:11] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.

[W] - 225 weights are affected by this issue: Detected subnormal FP16 values.
[W] - 1 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
[W] Check verbose logs for the list of affected weights.
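To make the warning concrete: an FP16 value below 2^-14 can only be stored as a subnormal (with reduced precision), and a value below 2^-24 cannot be represented at all, which is when TensorRT clamps it to the FP16 minimum subnormal. A minimal sketch of that check in NumPy (the `audit_fp16` helper and the sample weights are illustrative, not TensorRT code):

```python
import numpy as np

# IEEE 754 binary16 limits
SMALLEST_NORMAL = 2.0 ** -14     # ~6.10e-05: below this, FP16 goes subnormal
SMALLEST_SUBNORMAL = 2.0 ** -24  # ~5.96e-08: below this, FP16 underflows to zero

def audit_fp16(weights):
    """Count the two issues TensorRT warns about when casting FP32 weights to FP16."""
    w = np.abs(np.asarray(weights, dtype=np.float32))
    nonzero = w > 0
    subnormal = nonzero & (w < SMALLEST_NORMAL)       # representable only as subnormals
    underflow = nonzero & (w < SMALLEST_SUBNORMAL)    # would flush to zero; TRT clamps these
    return int(subnormal.sum()), int(underflow.sum())

weights = np.array([1e-3, 7e-6, 1e-9, 0.25], dtype=np.float32)
print(audit_fp16(weights))  # → (2, 1): two subnormal values, one of which also underflows
```

Weights triggering only the first warning lose precision; weights triggering the second would silently become zero without TensorRT's clamping.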
Yes, mixed precision is feasible: export the ONNX model in FP32, then enable FP16 when you build the engine (you don't need to export an FP16 model up front) and pin the layers with subnormal weights to FP32.

FrancescoSaverioZuppichini commented on Dec 20, 2022
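With trtexec this can be sketched as follows. The file names are placeholders, and `Conv_198` is taken from the warning above; substitute the layer names your own verbose log reports, and verify the flags against your TensorRT version:

```shell
# Build with FP16 enabled overall, but force the layers whose weights
# go subnormal in FP16 to stay in FP32.
trtexec --onnx=model.onnx \
        --fp16 \
        --precisionConstraints=obey \
        --layerPrecisions=Conv_198:fp32 \
        --saveEngine=model_mixed.engine
```

`--precisionConstraints=obey` tells the builder it must honor the per-layer precisions rather than treat them as hints.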