$env:TENSORRT_DIR = "F:\env\TensorRT"  # Windows: this creates a system variable named TENSORRT_DIR with the value F:\env\TensorRT
# Linux: vim ~/.bashrc, add the following as the last line, then run `source ~/.bashrc`:
# export TENSORRT_DIR=/home/gy77/TensorRT
$env:Path = "F:\env\TensorRT\lib"  # Windows: this command ...
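The same setup can be scripted from Python for the current process; a minimal sketch, reusing the example Windows path from above (adjust it to your own install location):

```python
import os

# Example install location from above; adjust to your machine.
tensorrt_dir = r"F:\env\TensorRT"

# Export TENSORRT_DIR and prepend its lib directory to PATH so the
# TensorRT shared libraries can be found at runtime.
os.environ["TENSORRT_DIR"] = tensorrt_dir
lib_dir = os.path.join(tensorrt_dir, "lib")
os.environ["PATH"] = lib_dir + os.pathsep + os.environ.get("PATH", "")
```

Note that this only affects the current process and its children; for a persistent setting, use the system-variable or `~/.bashrc` approach above.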
AssertionError: Failed to create TensorRT engine
2022-08-08 09:41:08,084 - mmdeploy - ERROR - mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt with Call id: 1 failed. exit.
brilliant-soilder (Author) commented on Aug 8, 2022:
(trt2) E:\mm...
File "D:\Anaconda3\envs\aoc\lib\site-packages\mmdeploy\backend\tensorrt\onnx2tensorrt.py", line 79, in onnx2tensorrt
    from_onnx(
File "D:\Anaconda3\envs\aoc\lib\site-packages\mmdeploy\backend\tensorrt\utils.py", line 153, in from_onnx
    assert engine is not None, 'Failed to create ...
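The bare `assert engine is not None` in `from_onnx` gives little diagnostic information when it fires. A small hypothetical helper (not part of mmdeploy) sketches how any collected parser errors could be surfaced instead:

```python
def check_engine(engine, parser_errors=()):
    """Raise a descriptive error when engine creation fails.

    `parser_errors` would hold messages collected from the ONNX
    parser / builder log; here it is a plain sequence of strings.
    """
    if engine is None:
        details = "; ".join(str(e) for e in parser_errors) or "no parser errors recorded"
        raise RuntimeError(f"Failed to create TensorRT engine: {details}")
    return engine
```

With this pattern, the reason for the failure (e.g. an unsupported op) appears directly in the exception message instead of a bare `AssertionError`.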
Fix: this problem occurs because TensorRT does not contain the custom ops added by mmdeploy, so the libraries compiled when building the image must be copied to a location where tritonserver can use them.
# Start a container
docker run -it --rm --name temp 172.18.18.222:5000/schinper/ai-train:schiper_deploy_xavier_v1.0 /bin/bash
# In another console, run the copy
docker cp /root/wor...
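The copy step can also be scripted; the helper below only assembles the `docker cp` argument list (the container name and paths are placeholders, not the real ones from the truncated command above), which you would then hand to `subprocess.run`:

```python
import subprocess

def docker_cp_cmd(container, src, dst):
    """Build the `docker cp` argument list for copying a file
    out of a running container to the host."""
    return ["docker", "cp", f"{container}:{src}", dst]

# Example with placeholder paths: copy the compiled mmdeploy TensorRT
# plugin library out of the container so tritonserver can load it.
cmd = docker_cp_cmd(
    "temp",
    "/root/workspace/mmdeploy/build/lib/libmmdeploy_tensorrt_ops.so",
    "/opt/tritonserver/lib/",
)
# subprocess.run(cmd, check=True)  # uncomment to actually execute
```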
- sync #1493 to support TorchAllocator as TensorRT GPU Allocator and fix DCNv2 TensorRT plugin error (#1519)
- Add md link check github action (#1320)
- Remove cudnn dependency for transform 'mmaction2::format_shape' (#1509)
- Refactor rewriter context for MMRazor (#1483)
- Add is_batched argument ...
First, thanks a lot for your amazing work, it is so helpful! Describe the bug: I tried to convert a pretrained PointPillars model (KITTI 3-class) to TensorRT, but the conversion failed. Note that the conversion to ONNX is OK (I did...
When I run the demo:
python ./tools/deploy.py \
    configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    $PATH_TO_MMDET/configs/retinanet/retinanet_r18_fpn_1x_coco.py \
    retinanet_r18_fpn_1x_coco_20220407_171055-614fd399...
(mm) ubuntu@y9000p:/work/COCO/mmdeploy$ python tools/deploy.py configs/mmdet/detection/detection_tensorrt_dynamic-416x416-864x864.py ../mmdetection/configs/yolox/yolox_s_8x8_300e_coco.py ../checkpoints/yolox_s_8x8_300e_coco_20211121_095711-4592a793.pth demo/demo.jpg --work-dir work...
When I try to load the RTMDet-Inst end2end.onnx model created using mmdeploy into a TensorRT Python script to build the engine, I get the following error: [TRT] [E] 4: [graphShapeAnalyzer.cpp::nvinfer1::builder::`anonymous-namespace'::ShapeAnalyzerImpl::processCheck::862] Error Code ...
25 - mmengine - INFO - Successfully loaded tensorrt plugins from e:\openmmlab\mmdeploy\mmdeploy\lib\mmdeploy_tensorrt_ops.dll
[09/06/2023-20:44:26] [TRT] [I] [MemUsageChange] Init CUDA: CPU +479, GPU +0, now: CPU 19001, GPU 915 (MiB)
[09/06/2023-20:44:26] [TRT] [I] [...
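The "Successfully loaded tensorrt plugins" line corresponds to loading the mmdeploy custom-op library into the process before building or deserializing an engine. A minimal sketch of doing this manually with `ctypes` (loading a shared library this way registers the TensorRT plugins it contains; the DLL name is the one from the log, but the path is an assumption):

```python
import ctypes
import os

def load_trt_plugins(plugin_path):
    """Load the mmdeploy TensorRT custom-op library into the process
    so the engine builder/runtime can resolve the plugin ops."""
    if not os.path.exists(plugin_path):
        raise FileNotFoundError(f"plugin library not found: {plugin_path}")
    # On Windows this would be mmdeploy_tensorrt_ops.dll; on Linux,
    # libmmdeploy_tensorrt_ops.so.
    return ctypes.CDLL(plugin_path)
```

Loading the library before `trt.Runtime(...).deserialize_cuda_engine(...)` avoids "plugin not found" style failures for models containing mmdeploy custom ops.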