Kaveh Safavigerdini

I am a PhD Candidate in Computer Science at the University of Missouri, advised by Prof. K. Palaniappan at the Computational Imaging & Vision Analysis (CIVA) Lab. My research focuses on computer vision, deep learning, and their applications in robotics and scientific analysis.

Previously, I received a dual M.Sc. & B.Sc. in Dynamical & Control Systems from Sharif University of Technology and an M.Sc. in Computer & Electrical Engineering from the University of Missouri.

Email  /  CV  /  Google Scholar  /  GitHub  /  LinkedIn

profile photo

Bio

I am a PhD student specializing in computer vision, with a strong interdisciplinary background in control systems and algorithm optimization. This combination enables me to work at the intersection of computer vision, machine learning, and dynamical systems. My doctoral work involves developing deep learning frameworks for 3D reconstruction, real-time feature tracking, and explainable AI (XAI). My skills include computer vision (object tracking, structure from motion (SfM), GANs), control theory, and programming in Python, C++, and PyTorch.


Research

My doctoral research focuses on the following areas:


Algorithms:
  • 3D Point Tracking in Monocular Videos: An end-to-end deep learning framework that estimates 3D trajectories, scene geometry, and camera motion from monocular video.
  • Coarse-to-Fine Keypoint Tracking with Transformers: A Transformer-based method for accurate pixel-level keypoint tracking that improves SfM and other feature-tracking applications.
  • Real-Time Feature Tracking in Monocular Videos: High-speed algorithms for tracking features at interactive frame rates.
  • Gram-Schmidt Feature Reduction Class Activation Map (GFR-CAM): An XAI framework that generates multiple visual explanations via feature decomposition.
  • 3D Measurement Tool for WAMI Datasets: A cross-platform 3D measurement and annotation GUI for analyzing large wide-area motion imagery (WAMI) photogrammetric datasets.
Applications:
  • 3D Scene Mapping for Augmented Reality and Robotics: Using a differentiable SfM framework to generate dense 3D maps for robust AR and autonomous navigation.
  • Kinematic Analysis and Shape Estimation of CNT Pillars: Applying feature tracking algorithms to analyze carbon nanotube growth rates and estimate pillar shapes.
  • Horse Lameness Detection through Video Feature Tracking: A feature-tracking pipeline that analyzes equine skeletal movement in video to detect lameness.

Publications

GFR-CAM: Gram-Schmidt Feature Reduction for Hierarchical CAMs
K. Safavigerdini, et al.
International Conference on Computer Vision (ICCV), 2025.

Automated Feature Tracking for Real-Time Kinematic Analysis and Shape Estimation of Carbon Nanotube Growth
K. Safavigerdini, et al.
International Conference on Computer Vision (ICCV), 2025.

Predicting Mechanical Properties of CNT Images Using MLS
K. Safavigerdini, et al.
IEEE International Conference on Image Processing (ICIP), 2023.

Creating Semi-Quanta MLS CNT Images Using CycleGAN
K. Safavigerdini, et al.
IEEE Applied Imagery Pattern Recognition Workshop (AIPR), 2023.

CNT Forest Self-Assembly Insights from In-situ ESEM Synthesis
Surya, ..., K. Safavigerdini, et al.
Carbon, 2024.

Stabilizing unstable periodic orbit of unknown fractional-order systems via adaptive delayed feedback control
Yaghooti, K. Safavigerdini, et al.
Proceedings of the Institution of Mechanical Engineers (Proc. IMechE), 2023.

Adaptive synchronization of uncertain fractional-order chaotic systems using sliding mode control techniques
Yaghooti, Siahi, K. Safavigerdini, et al.
Proceedings of the Institution of Mechanical Engineers (Proc. IMechE), 2020.


Experience

Research Intern, May 2023 – Aug 2023
Project: Motion and Eye-Tracking Data in AR

Team Leader, Oct 2018 – Oct 2019
Project: Online Monitoring of Tehran’s Air Pollution


Projects

Impact of Motion and Eye-Tracking Data in Augmented Reality Learning

We conducted a series of experiments to explore the impact of metacognitive monitoring feedback within a novel motion- and location-based augmented reality (AR) learning environment. Additionally, we examined how motion and eye-tracking data changed in response to metacognitive monitoring feedback and analyzed their relationship with students' AR learning outcomes.


Awards

Best Thesis Award for Practical Application, Sharif University of Technology, 2016