All posts by aislab_webstaff

Mr. Yasuyuki Fujii won the “Young Scientist Lecture Award” at RSJ 2022

We are delighted and proud to announce that, after a heated competition with other wonderful researchers, Mr. Fujii of AIS Lab has won the “Young Scientist Lecture Award” at the 40th Annual Conference of the Robotics Society of Japan.

Let’s congratulate him again on this achievement. We are expecting even more excellent work from you in the future, Fujii-san!

The award ceremony took place on the final day of RSJ 2022

A paper on 3D self-location estimation for indoor use has been published.

Thuan Bui Bach’s paper has been published in the ISPRS Journal of Photogrammetry and Remote Sensing, a highly influential top journal with an Impact Factor of 8.979 (as of 2022). The paper addresses 3D self-localization in indoor environments using 2D camera data, and comparative experiments show that the proposed method achieves higher accuracy and performance than existing methods. The source code is also publicly available, so those interested should see the information below.


FeatLoc: Absolute pose regressor for indoor 2D sparse features with simplistic view synthesizing
Thuan Bui Bach, Tuan Tran Dinh, Joo-Ho Lee
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 189, 2022, Pages 50-62, ISSN 0924-2716,
https://doi.org/10.1016/j.isprsjprs.2022.04.021.

Abstract: Precise localization using visual sensors is a fundamental requirement in many applications, including robotics, augmented reality, and autonomous systems. Traditionally, the localization problem has been tackled by leveraging 3D-geometry registering approaches. Recently, end-to-end regressor strategies using deep convolutional neural networks have achieved impressive performance, but they do not achieve the same performance as 3D structure-based methods. To some extent, this problem has been tackled by leveraging the beneficial properties of sequential images or geometric constraints. However, these approaches can only achieve a slight improvement. In this work, we address this problem for indoor scenarios, and we argue that regressing the camera pose using sparse feature descriptors could significantly improve the pose regressor performance compared with deep single-feature-vector representation. We propose a novel approach that can directly consume sparse feature descriptors to regress the camera pose effectively. More importantly, we propose a simplistic data augmentation procedure to exploit the sparse descriptors of unseen poses, leading to a remarkable enhancement in the generalization performance. Lastly, we present an extensive evaluation of our method on publicly available indoor datasets. Our FeatLoc achieves 22% and 40% improvements in translation errors on 7-Scenes and 12-Scenes respectively, compared with recent state-of-the-art absolute pose regression-based approaches. Our codes are released at https://github.com/ais-lab/FeatLoc.
Keywords: Visual localization; Sparse features; Absolute pose regression
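To illustrate the general idea of regressing an absolute camera pose directly from sparse feature descriptors (as opposed to a single global image embedding), here is a minimal sketch. It is not the authors’ FeatLoc architecture; the layer sizes, the PointNet-style max-pooling aggregation, and the quaternion output are illustrative assumptions. See the official repository at https://github.com/ais-lab/FeatLoc for the actual implementation.

```python
# Minimal sketch (assumed architecture, not FeatLoc itself): a pose regressor
# that consumes a set of sparse feature descriptors per image and outputs a
# 7-D absolute pose (3-D translation + unit quaternion rotation).
import torch
import torch.nn as nn


class SparseDescriptorPoseRegressor(nn.Module):
    def __init__(self, desc_dim: int = 256, hidden: int = 512):
        super().__init__()
        # Shared per-descriptor encoder; applying the same MLP to every
        # keypoint keeps the model independent of descriptor ordering.
        self.encoder = nn.Sequential(
            nn.Linear(desc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Separate regression heads for translation and rotation.
        self.trans_head = nn.Linear(hidden, 3)
        self.rot_head = nn.Linear(hidden, 4)

    def forward(self, descriptors: torch.Tensor) -> torch.Tensor:
        # descriptors: (batch, num_keypoints, desc_dim)
        feat = self.encoder(descriptors)        # (B, N, hidden)
        feat = feat.max(dim=1).values           # aggregate over keypoints
        t = self.trans_head(feat)               # camera position
        q = self.rot_head(feat)
        q = q / q.norm(dim=-1, keepdim=True)    # normalize to unit quaternion
        return torch.cat([t, q], dim=-1)        # (B, 7) absolute pose


if __name__ == "__main__":
    model = SparseDescriptorPoseRegressor()
    fake_descs = torch.randn(2, 1024, 256)      # e.g. 1024 keypoints per image
    print(model(fake_descs).shape)              # torch.Size([2, 7])
```

In this kind of setup, the descriptor-level input is also what makes the paper’s augmentation idea possible: synthesized views contribute new sets of sparse descriptors with known poses for training, rather than full rendered images.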

Survey paper on DNN and gait

Yume Matsushita’s survey paper has been published in Oxford Academic’s Journal of Computational Design and Engineering. This 34-page survey covers recent gait research for medical purposes using machine learning, bringing medical professionals and machine learning researchers together under the keyword of gait. It is Open Access, so anyone can read it.

Recent use of deep learning techniques in clinical applications based on gait: a survey