The fusion of dynamic 2D/3D multi-sensor data has not been extensively explored in the vision community. The MSF 2017 workshop will encourage interdisciplinary interaction and collaboration among the computer vision, remote sensing, robotics, and photogrammetry communities, serving as a forum for research groups from academia and industry. There has been an ever-increasing amount of multi-sensor data collection, e.g. the KITTI benchmark (stereo + laser), from platforms such as autonomous vehicles, surveillance cameras, UAVs, planes, and satellites. With its emphasis on multi-sensory data, we hope this workshop will foster new research directions in the computer vision community. The event is organized with ISPRS WG II/5 "Dynamic Scene Analysis". Submissions are invited from all areas of computer vision and image analysis relevant for, or applied to, scene understanding. The workshop will focus on the fusion of multi-sensory dynamic spatial information from stereo sequences, visual and infrared sequences, video and lidar sequences, stereo and laser sequences, etc. Submissions on indoor applications are also welcome.
Topics of interest include, but are not limited to:
Multi-modal deep learning
Image registration
Indoor/outdoor scene understanding
Security/surveillance
Robot/drone navigation
Object detection and tracking
Action recognition
Pose estimation
Multi-modal sensor fusion
Multi-scale fusion
Low-level sensory data processing
3D scanning sensors, laser and lidar systems
3D object recognition and classification
Large-scale sensing
Applications to robotics
Conference date: October 23, 2017
Registration deadline