Sachini Herath

I am a PhD student in the School of Computing Science at Simon Fraser University, supervised by Prof. Yasutaka Furukawa.



Neural Inertial Localization


This paper proposes the inertial localization problem: the task of estimating the absolute location from a sequence of inertial sensor measurements. We develop a solution, dubbed Neural Inertial Localization (NILoc), which 1) uses a neural inertial navigation technique to turn the inertial sensor history into a sequence of velocity vectors; then 2) employs a transformer-based neural architecture to infer the device location from the velocity sequence.
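The two-stage data flow can be sketched as follows. This is a minimal illustration of the interfaces only, not the paper's networks: the function names, the IMU column layout, and both stand-in computations (naive acceleration integration for stage 1, dead reckoning to a single floorplan cell for stage 2) are assumptions for illustration.

```python
import numpy as np

def imu_to_velocity(imu_window, dt=0.005):
    """Stage-1 stand-in: in NILoc a neural inertial navigation model maps a
    window of IMU samples to a 2D velocity vector. Here acceleration is
    naively integrated instead; the (ax, ay, ...) column layout is assumed."""
    return imu_window[:, :2].sum(axis=0) * dt

def locate(velocities, grid_size=8):
    """Stage-2 stand-in: the transformer outputs a likelihood over floorplan
    cells; here we dead-reckon from an assumed start cell and place a delta
    at the resulting cell."""
    pos = np.array([grid_size / 2.0, grid_size / 2.0])
    for v in velocities:
        pos = pos + v  # one unit of time per velocity sample, assumed
    likelihood = np.zeros((grid_size, grid_size))
    col, row = np.clip(pos.astype(int), 0, grid_size - 1)
    likelihood[row, col] = 1.0
    return likelihood
```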


Authors: Sachini Herath, David Caruso, Chen Liu, Yufan Chen, Yasutaka Furukawa

Fusion-DHL: WiFi, IMU, and Floorplan Fusion for Dense History of Locations in Indoor Environments


The paper proposes a multi-modal sensor fusion algorithm that fuses WiFi, IMU, and floorplan information to infer an accurate and dense location history in indoor environments. The algorithm uses 1) an inertial navigation algorithm to estimate a relative motion trajectory from IMU sensor data; 2) an industrial WiFi-based localization API to obtain positional constraints and geo-localize the trajectory; and 3) a convolutional neural network to refine the location history so that it is consistent with the floorplan.
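Step 2, geo-localizing a relative trajectory against sparse position fixes, can be illustrated with a least-squares 2D similarity fit (Umeyama's method). This is a generic alignment sketch, not the paper's optimization: the function name and the assumption of one-to-one matched points are hypothetical.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity (scale s, rotation R, translation t)
    aligning src to dst, i.e. minimizing sum ||s * R @ src_i + t - dst_i||^2.
    src: relative-trajectory points sampled at the times of the position
    fixes; dst: the corresponding fixed positions (e.g. from WiFi)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)              # cross-covariance of centered sets
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])                   # guard against a reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / xs.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Once fitted, the whole relative trajectory (not just the matched samples) can be mapped into floorplan coordinates with the same (s, R, t).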


Authors: Sachini Herath, Saghar Irandoust, Bowen Chen, Yiming Qian, Pyojin Kim, Yasutaka Furukawa

RoNIN: Robust Neural Inertial Navigation in the Wild

MSc. Thesis Research Project. [Website] [Code]

This research focuses on data-driven inertial navigation, where the task is to estimate the positions and orientations of a moving subject from a sequence of IMU sensor measurements. We present 1) a new benchmark of IMU sensor data with ground-truth 3D trajectories under natural human motions; 2) neural inertial navigation architectures that make significant improvements on challenging motion cases; and 3) comprehensive evaluations of the competing methods and datasets.
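The position estimate in this setting is typically obtained by integrating the per-frame velocities that a learned model regresses. A minimal sketch of that integration step (the function name and 2D formulation are assumptions; the actual RoNIN code is in the linked repository):

```python
import numpy as np

def integrate_velocities(velocities, dt, start=(0.0, 0.0)):
    """Turn a sequence of per-frame 2D velocity predictions (the quantity a
    RoNIN-style network regresses) into a position trajectory by cumulative
    integration with a fixed time step dt."""
    steps = np.asarray(velocities, dtype=float) * dt
    return np.asarray(start, dtype=float) + np.cumsum(steps, axis=0)
```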


Authors: Hang Yan*, Sachini Herath*, Yasutaka Furukawa (* indicates equal contribution)

Virtual Reality Content Generation and Rendering Framework

Final year capstone project for BSc. Engineering degree. [Blog]

A content generation and rendering framework for freely navigable virtual reality environments. The environment was created from photospheres, and intermediate views were generated using features from neighbouring images. A Segway-inspired, gesture-based input system was introduced for navigation.


Authors: Sachini Herath, Vipula Dissanayake, Sanka Rasnayaka, Sachith Seneviratne, Rajith Vidanaarachchi, Dr. Chandana Gamage

Other Publications
  • Qian, Y., Yan, H., Herath, S., Kim, P. and Furukawa, Y., 2022. Single User WiFi Structure from Motion in the Wild. ICRA 2022.
  • Rasoulidanesh, M., Yadav, S., Herath, S., Vaghei, Y., Payandeh, S., 2019. Deep Attention Models for Human Tracking Using RGBD. Sensors 19(4).
  • Dissanayake, D., Perera, T., Elladeniya, C., Dissanayake, K., Herath, S. and Perera, I., 2018. Identifying the Learning Style of Students in MOOCs Using Video Interactions. International Journal of Information and Education Technology, 8(3).
  • Hewawalpita, S., Herath, S., Perera, I. and Meedeniya, D., 2018. Effective Learning Content Offering in MOOCs with Virtual Reality: An Exploratory Study on Learner Experience. Journal of Universal Computer Science, 24(2).

Template credit: Colorlib