ONNX Runtime executes models using the CPU EP (Execution Provider) by default. It is possible to use the NNAPI EP (Android) or the CoreML EP (iOS) for ORT format models instead by registering it in the SessionOptions used when creating an InferenceSession. These may or may not offer better performance than the CPU EP, depending on the model and the device.
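As a sketch of what this looks like on Android with the ORT Java API, the NNAPI EP can be registered on the SessionOptions before the session is created (the model path below is a placeholder; this assumes the onnxruntime-android package, which provides `SessionOptions.addNnapi()`, is on the classpath):

```java
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtException;
import ai.onnxruntime.OrtSession;

public class NnapiSessionExample {
    public static void main(String[] args) throws OrtException {
        OrtEnvironment env = OrtEnvironment.getEnvironment();
        try (OrtSession.SessionOptions options = new OrtSession.SessionOptions()) {
            // Register the NNAPI EP; nodes it cannot handle fall back to the CPU EP.
            options.addNnapi();
            // Create the session with the configured options.
            try (OrtSession session = env.createSession("model.ort", options)) {
                // Run inference as usual via session.run(...).
            }
        }
    }
}
```

On iOS the equivalent step is enabling the CoreML EP on the session options in the Objective-C/Swift API. Because an EP may not support every operator in a model, it is worth benchmarking both configurations before deciding.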