Published in IEEE International Conference on Robotics and Automation (ICRA), 2026
Accepted to ICRA 2026. Code and paper will be released soon!
Recommended citation: ...
Download Paper
Undergraduate Thesis, Nankai University, 2024
Thanks to their high maneuverability, drone swarms can explore unknown environments efficiently, with applications in domains such as disaster relief and forest exploration. Our method allocates exploration regions to individual drones while accounting for the uneven distribution of obstacles: it computes a local environmental complexity measure and uses it to adjust the exploration cost of each region. All code is open-sourced at https://github.com/Jiang-Yufei/Multi-agent-exploration.git
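A minimal sketch of the complexity-weighted allocation idea, under stated assumptions: the window-based complexity measure, the greedy assignment, and all function names are illustrative, not the thesis implementation (see the repository above for the actual method).

```python
import numpy as np

def local_complexity(occupancy, cell, radius=3):
    """Illustrative complexity measure: obstacle density in a square
    window of the occupancy grid around a cell."""
    r0, c0 = cell
    window = occupancy[max(r0 - radius, 0):r0 + radius + 1,
                       max(c0 - radius, 0):c0 + radius + 1]
    return float(window.mean())

def exploration_cost(drone_pos, region_center, occupancy, alpha=2.0):
    """Travel distance inflated by local complexity, so cluttered
    regions cost more and get balanced across the swarm."""
    dist = np.linalg.norm(np.asarray(drone_pos) - np.asarray(region_center))
    kappa = local_complexity(occupancy, tuple(np.asarray(region_center, int)))
    return dist * (1.0 + alpha * kappa)

def allocate_regions(drones, regions, occupancy):
    """Greedy one-to-one assignment of regions to drones by weighted
    cost (assumes at least as many regions as drones)."""
    cost = np.array([[exploration_cost(d, r, occupancy) for r in regions]
                     for d in drones])
    assignment, remaining = {}, set(range(len(regions)))
    for i in np.argsort(cost.min(axis=1)):   # most constrained drone first
        if not remaining:
            break
        j = min(remaining, key=lambda j: cost[i, j])
        assignment[i] = j
        remaining.remove(j)
    return assignment

# Example: two drones, one cluttered block pushing up the cost of region 0.
occ = np.zeros((50, 50))
occ[20:30, 20:30] = 1.0
print(allocate_regions([(0, 0), (49, 49)], [(25, 25), (5, 45)], occ))
```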
Research Project, Penn State University, 2024
We propose a self-supervised UAV trajectory planning pipeline that integrates learning-based depth perception with differentiable trajectory optimization. A 3D cost map guides UAV behavior without expert demonstrations or human labels. Additionally, we incorporate a neural-network-based time allocation strategy to improve efficiency and trajectory optimality. The system thus combines robust learning-based perception with reliable physics-based optimization for improved generalizability and interpretability. Both simulation and real-world experiments validate our approach across various environments, demonstrating its effectiveness and robustness. Our method reduces position tracking error by 31.33% and control effort by 49.37% compared to the state of the art.
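A minimal sketch of the differentiable-optimization step under simplifying assumptions: an analytic obstacle-clearance penalty stands in for the learned 3D cost map, and free waypoints stand in for the full trajectory parameterization; `obstacles`, `cost_field`, and the gains are hypothetical.

```python
import torch

# Hypothetical obstacle centers standing in for the learned 3D cost map.
obstacles = torch.tensor([[2.0, 1.0, 1.5], [4.0, -0.5, 1.0]])

def cost_field(points, margin=0.8):
    """Differentiable stand-in for the cost map: quadratic hinge on
    clearance to the nearest obstacle center."""
    d = torch.cdist(points, obstacles)                 # (N, num_obstacles)
    return torch.relu(margin - d.min(dim=1).values).pow(2).sum()

def smoothness(points):
    """Penalize second differences so gradients favor smooth paths."""
    accel = points[2:] - 2 * points[1:-1] + points[:-2]
    return accel.pow(2).sum()

# Straight-line initialization; endpoints stay fixed during optimization.
start, goal = torch.tensor([0.0, 0.0, 1.0]), torch.tensor([6.0, 0.0, 1.0])
alphas = torch.linspace(0, 1, 20).unsqueeze(1)
waypoints = (start + alphas * (goal - start)).clone()
free = waypoints[1:-1].clone().requires_grad_(True)    # interior points only

opt = torch.optim.Adam([free], lr=0.05)
for _ in range(300):
    traj = torch.cat([start.unsqueeze(0), free, goal.unsqueeze(0)])
    loss = cost_field(traj) + 0.1 * smoothness(traj)   # no labels needed
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key property this illustrates is that gradients flow from the cost field back to the trajectory parameters, which is what lets the full pipeline train without expert demonstrations.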
Research Project, Penn State University, 2025
Aerial manipulation (AM) promises to move Unmanned Aerial Vehicles (UAVs) beyond passive inspection to contact-rich tasks such as grasping, assembly, and in-situ maintenance. Most prior AM demonstrations rely on external motion capture (MoCap) and emphasize position control for coarse interactions, limiting deployability. We present a fully onboard perception–control pipeline for contact-rich AM that achieves accurate motion tracking and regulated contact wrenches without MoCap. The main components are (1) an augmented visual–inertial odometry (VIO) estimator with contact-consistency factors that activate only during interaction, tightening uncertainty around the contact frame and reducing drift, and (2) image-based visual servoing (IBVS) to mitigate perception–control coupling, together with a hybrid force–motion controller that regulates contact wrenches and lateral motion for stable contact. Experiments show that our approach closes the perception-to-wrench loop using only onboard sensing, yielding a 66.01% improvement in velocity estimation at contact, reliable target approach, and stable force holding, pointing toward deployable, in-the-wild aerial manipulation.
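A minimal sketch of the hybrid force–motion idea using a textbook selection-matrix law; the axis split, gains, and function names are illustrative assumptions, not the controller described above.

```python
import numpy as np

# Assumed axis split: force regulated along the contact normal (x),
# motion tracked along the tangential axes (y, z).
S_force = np.diag([1.0, 0.0, 0.0])       # force-controlled axes
S_motion = np.eye(3) - S_force           # motion-controlled axes

def hybrid_command(f_meas, f_des, p, p_des, v, kf=0.6, kp=4.0, kd=1.5):
    """Selection-matrix hybrid law: proportional force regulation on the
    contact axis, PD position tracking on the complementary axes."""
    force_term = S_force @ (f_des + kf * (f_des - f_meas))
    motion_term = S_motion @ (kp * (p_des - p) - kd * v)
    return force_term + motion_term      # commanded wrench components

# Example: hold 2 N against a surface while correcting a lateral offset.
cmd = hybrid_command(
    f_meas=np.array([1.6, 0.0, 0.0]), f_des=np.array([2.0, 0.0, 0.0]),
    p=np.array([0.0, 0.1, 1.2]), p_des=np.array([0.0, 0.0, 1.2]),
    v=np.zeros(3))
print(cmd)
```

The selection matrices keep the two objectives orthogonal, so pushing harder along the contact normal never fights the lateral position loop, which is what makes stable force holding possible during contact.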