The nvinfer configuration file is used by the nvinfer plugin; see the DeepStream plugin manual for more details. The following are key parameters for running the MaskRCNN model:

    uff-file=<Path to MRCNN uff model>
    parse-bbox-instance-mask-func-name=<post-process parser name>
    custom-lib-path=<path to postproc...
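For reference, here is a minimal sketch of how these keys might appear in an nvinfer configuration file. The file paths, the parser function name NvDsInferParseCustomMrcnnTLT, and the extra properties shown are illustrative assumptions based on common TAO MaskRCNN deployments, not values taken from this document:

    [property]
    # Illustrative paths (assumptions); replace with your own files
    uff-file=/path/to/model.uff
    model-engine-file=/path/to/model.engine
    # Parser function and library names are assumptions; confirm against
    # your post-processor build (e.g. from the deepstream_tao_apps repo)
    parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLT
    custom-lib-path=/path/to/libnvds_infercustomparser_tao.so
    # network-type=3 selects instance segmentation in nvinfer
    network-type=3
    output-instance-mask=1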
DeepStream SDK is a streaming analytics toolkit that accelerates the building of AI-based video analytics applications. This section describes how to deploy your trained model to the DeepStream SDK. To deploy a model trained by TAO to DeepStream, you have two options:

Option 1: Integrate the .etlt model dire...
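As a sketch of Option 1, the .etlt model is typically referenced directly from the nvinfer configuration via the tlt-encoded-model and tlt-model-key properties; the path and key below are placeholders, not values from this document:

    [property]
    # Placeholder values (assumptions); use your own exported model
    # and the encoding key set at export time
    tlt-encoded-model=/path/to/model.etlt
    tlt-model-key=<your encoding key>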
Sample Usage

The batch size used for inference will be the same as the --max_batch_size used during engine generation. Here's an example of using the inference command to run inference with the TensorRT engine:

    tao deploy mask_rcnn inference -m /export/int8.engine \
        -e /workspace/default_spec.txt \
        -i /workspace/raw-...
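Since the example above is truncated, here is a complete invocation sketch with placeholder paths; the image directory, results directory, and the -r flag are assumptions, so check tao deploy mask_rcnn inference --help for the exact arguments your version accepts:

    # Placeholder paths (assumptions)
    tao deploy mask_rcnn inference -m /export/int8.engine \
        -e /workspace/default_spec.txt \
        -i /workspace/raw-images \
        -r /workspace/results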
Option 3 (Deprecated for x86 devices): Generate a device-specific optimized TensorRT engine using TAO Converter. Machine-specific optimizations are done as part of the engine creation process, so a distinct engine should be generated for each environment and hardware configuration. If the TensorRT or...
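As a sketch of this deprecated flow, a tao-converter invocation for a MaskRCNN .etlt model might look like the following; the encoding key, input dimensions, and output node names are assumptions drawn from common TAO MaskRCNN examples, so verify them against your model's export settings:

    # Illustrative values (assumptions): $KEY is the model encoding key,
    # -d gives the C,H,W input dims, -o names the MaskRCNN output nodes
    tao-converter -k $KEY \
        -d 3,832,1344 \
        -o generate_detections,mask_fcn_logits/BiasAdd \
        -t fp16 \
        -e /export/model.engine \
        model.etlt

Because the optimizations are device-specific, this command must be re-run on each target device rather than copying an engine file between machines.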