Topic: SC-DepthV3 - Self-supervised Monocular Depth Estimation in Dynamic Scenes
Speaker: Jiawang Bian, Postdoctoral Researcher, University of Oxford
Venue: TechBeat AI Community

Talk Outline
1. SC-Depth: resolving the scale inconsistency of depth estimation across video frames
2. SC-DepthV2: addressing the training difficulty caused by image rotation in indoor scenes
3. SC-DepthV3: addressing the training difficulty caused by dynamic objects in dynamic scenes

Talk preview material
Paper link: https://a...
For the loss functions, in addition to the earlier geometry consistency loss and photometric loss, SC-DepthV3 introduces an edge-aware smoothness loss to regularize the predicted depth maps. For evaluation, SC-DepthV3 is tested extensively on six datasets: DDAD, BONN, TUM, KITTI, NYUv2, and IBims-1. The qualitative results show that SC-DepthV3 is highly robust in dynamic environments, and the quantitative results also demonstrate that SC-DepthV3, in dynamic...
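The edge-aware smoothness loss mentioned above is, in most self-supervised depth pipelines, the standard formulation popularized by Godard et al.: disparity gradients are penalized, but the penalty is down-weighted wherever the input image itself has strong edges, so depth discontinuities are allowed at object boundaries. A minimal NumPy sketch of that standard formulation (the exact loss used in SC-DepthV3 may differ in details such as normalization):

```python
import numpy as np

def edge_aware_smoothness(disp, img):
    """Standard edge-aware smoothness loss (sketch, not the paper's exact code).

    disp: (H, W) predicted disparity/inverse depth
    img:  (H, W, 3) input RGB image
    """
    # Mean-normalize disparity so the loss is invariant to its global scale.
    disp = disp / (disp.mean() + 1e-7)

    # First-order gradients of the disparity map.
    d_dx = np.abs(disp[:, 1:] - disp[:, :-1])
    d_dy = np.abs(disp[1:, :] - disp[:-1, :])

    # Image gradients, averaged over color channels.
    i_dx = np.mean(np.abs(img[:, 1:] - img[:, :-1]), axis=-1)
    i_dy = np.mean(np.abs(img[1:, :] - img[:-1, :]), axis=-1)

    # Strong image edges (large gradients) suppress the smoothness penalty,
    # so sharp depth changes are cheap exactly where the image has edges.
    return (d_dx * np.exp(-i_dx)).mean() + (d_dy * np.exp(-i_dy)).mean()
```

On a textureless image the exponential weights are all 1, so the loss reduces to a plain total-variation penalty on disparity; near image edges the weights shrink toward 0, preserving sharp depth boundaries.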
It is integrated into SC-DepthV1 and jointly trained with the self-supervised losses, greatly boosting the performance.

In SC-DepthV3 (TPAMI 2023), we propose a robust learning framework for accurate and sharp monocular depth estimation in (highly) dynamic scenes. As the photometric loss, which is the main loss in self-supervised methods, is not valid in dynamic object regions and occlusions, previous...
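The failure mode described above is concrete: the photometric loss compares a target frame with a view synthesized from a neighboring frame under a static-scene assumption, so pixels on moving objects or in occluded regions produce large, misleading errors. The common remedy is to average the loss only over a validity mask that excludes such pixels. A minimal sketch of that masked photometric loss (the mask itself would come from a method such as SC-DepthV3's dynamic-region handling; here it is just an input):

```python
import numpy as np

def masked_photometric_loss(target, synthesized, valid_mask):
    """L1 photometric loss averaged only over valid pixels (sketch).

    target:      (H, W, 3) target frame
    synthesized: (H, W, 3) target frame re-synthesized from a source view
    valid_mask:  (H, W) 1.0 where the static-scene assumption holds,
                 0.0 on dynamic objects and occlusions
    """
    # Per-pixel L1 error, averaged over color channels.
    err = np.abs(target - synthesized).mean(axis=-1)
    # Average only over valid pixels; epsilon guards an all-zero mask.
    return (err * valid_mask).sum() / (valid_mask.sum() + 1e-7)
```

With an all-ones mask this is the plain mean L1 error; zeroing the mask on a moving object removes that object's (invalid) reprojection error from the training signal.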