For reference, the relevant docstring from the Dice evaluation code:

```python
    Returns:
        dict[str, float | ndarray]: Default metrics.
            <aAcc> float: Overall accuracy on all images.
            <Acc> ndarray: Per category accuracy, shape (num_classes, ).
            <Dice> ndarray: Per category dice, shape (num_classes, ).
    """
    dice_result = eval_metrics(
        results=results,
        gt_seg_maps=...
```
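The metrics named in that docstring can be derived from a pixel-level confusion matrix. A minimal sketch of the arithmetic (the function name `per_class_metrics` is illustrative, not mmseg's internal API):

```python
import numpy as np

def per_class_metrics(conf: np.ndarray):
    """Overall accuracy, per-class accuracy, and per-class Dice.

    conf[i, j] = number of pixels of true class i predicted as class j.
    """
    tp = np.diag(conf).astype(float)       # true positives per class
    gt = conf.sum(axis=1).astype(float)    # ground-truth pixels per class
    pred = conf.sum(axis=0).astype(float)  # predicted pixels per class
    aacc = tp.sum() / conf.sum()           # overall accuracy on all pixels
    acc = tp / gt                          # per-class accuracy, shape (num_classes,)
    dice = 2 * tp / (gt + pred)            # Dice = 2TP / (2TP + FP + FN)
    return aacc, acc, dice

# Toy 2-class confusion matrix
conf = np.array([[8, 2],
                 [1, 9]])
aacc, acc, dice = per_class_metrics(conf)
# aacc = 0.85, acc = [0.8, 0.9], dice = [16/19, 18/21]
```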
Hi, thanks for all your work. I'm trying to visualize the validation metrics in Weights & Biases using the `MMSegWandbHook`. I use the following settings:

```python
log_config = dict(
    interval=1000,
    hooks=[
        dict(type='TextLoggerHook', by_epoch=Fal...
```
```python
test_dataloader = val_dataloader
# val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'], ignore_index=2)
val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU', 'mDice', 'mFscore'])
test_evaluator = val_evaluator
```

Base config: `mmsegmentation/configs/pspnet/pspnet_r50-d8_4xb2-40k_la...`
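In MMSegmentation 1.x, validation metrics are typically routed to Weights & Biases through a visualizer backend rather than a logger hook. A hedged config sketch, assuming MMEngine's `WandbVisBackend` is available; the `project` name is a placeholder:

```python
# Sketch of wiring W&B into a 1.x config via the visualizer.
# 'my-seg-project' is a placeholder; init_kwargs are forwarded to wandb.init.
vis_backends = [
    dict(type='LocalVisBackend'),
    dict(type='WandbVisBackend', init_kwargs=dict(project='my-seg-project')),
]
visualizer = dict(
    type='SegLocalVisualizer',
    vis_backends=vis_backends,
    name='visualizer')
```

With this in place, scalars produced by `val_evaluator` (mIoU, mDice, mFscore) should appear in the W&B run alongside the training losses.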