Ang Li

I am currently an MSCS student at Stanford University. Previously, I was an undergraduate student majoring in computer science at UC San Diego, where I was fortunate to be advised by Prof. Hao Su. I had a great time working at Hillbot as a research engineer intern during the summer of 2024.

I am broadly interested in computer vision, computer graphics, and robotics.

CV  /  Email  /  GitHub

profile photo
Publications
SpaRP: Fast 3D Object Reconstruction and Pose Estimation from Sparse Views
Chao Xu, Ang Li, Linghao Chen, Yulin Liu, Ruoxi Shi, Minghua Liu*, Hao Su*
ECCV, 2024
Project Page / arXiv / Demo

While many single-image-to-3D methods have yielded visually appealing outcomes, they often lack sufficient controllability and tend to produce hallucinated regions that may not align with users' expectations. In this paper, we explore an important scenario in which the input consists of one or a few unposed 2D images of a single object, with little or no overlap. We propose a novel method, SpaRP, to reconstruct a 3D textured mesh and estimate the relative camera poses for these sparse-view images.

Close the Optical Sensing Domain Gap by Physics-Grounded Active Stereo Sensor Simulation
Xiaoshuai Zhang, Rui Chen, Ang Li, Fanbo Xiang, Yuzhe Qin, Jiayuan Gu, Zhan Ling, Minghua Liu, Peiyu Zeng, Songfang Han, Zhiao Huang, Tongzhou Mu, Jing Xu*, Hao Su*
T-RO, 2023
Project Page / arXiv

We narrow the sim-to-real gap between simulated depth and real active stereo-vision depth sensors by designing a fully physics-grounded simulation pipeline. Perception and RL methods trained in simulation transfer well to the real world without any fine-tuning. The simulator can also predict how an algorithm will perform in the real world, greatly reducing the human effort needed for evaluation.

Project
SAPIEN: A SimulAted Part-based Interactive ENvironment
Homepage / GitHub

SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects. It enables various robotic vision and interaction tasks that require detailed part-level understanding. I helped maintain the library and develop new features for it.
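To give a sense of how such an environment is typically driven from Python, here is a minimal sketch of setting up a scene and stepping the physics. It assumes the SAPIEN 2.x Python API, and it builds a simple box actor rather than loading one of the articulated assets.

```python
# Minimal sketch of a SAPIEN simulation loop (assumes the SAPIEN 2.x Python API).
import sapien.core as sapien

engine = sapien.Engine()                  # physics engine
renderer = sapien.SapienRenderer()        # renderer for visual bodies
engine.set_renderer(renderer)

scene = engine.create_scene()
scene.set_timestep(1 / 240)               # physics step size in seconds
scene.add_ground(altitude=0)              # static ground plane at z = 0

# Build a simple dynamic box actor (articulated objects are instead loaded via URDF loaders).
builder = scene.create_actor_builder()
builder.add_box_collision(half_size=[0.05, 0.05, 0.05])
builder.add_box_visual(half_size=[0.05, 0.05, 0.05])
box = builder.build(name="box")
box.set_pose(sapien.Pose(p=[0, 0, 0.5]))

for _ in range(240):                      # simulate one second; the box falls onto the ground
    scene.step()

print(box.get_pose())                     # final pose after settling
```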

SimSense: A Real-Time Depth Sensor Simulator
GitHub

SimSense is a GPU-accelerated depth sensor simulator for Python, implemented with CUDA. Built around semi-global matching, SimSense encapsulates various algorithms for computing depth from a pair of stereo images. It achieves over 250 FPS, whereas typical CPU implementations can hardly reach 1 FPS. The library has been integrated into the open-source simulation environment SAPIEN.
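For intuition about the underlying computation (not SimSense's actual API), here is a minimal CPU sketch of the same stereo-to-depth idea using OpenCV's semi-global block matcher; the file names, focal length, and baseline are illustrative assumptions.

```python
# CPU sketch of semi-global-matching depth estimation with OpenCV;
# NOT SimSense's API. File names and camera parameters are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)   # rectified right image

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # disparity search range; must be divisible by 16
    blockSize=5,         # matching window size
)

# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

fx = 920.0        # assumed focal length in pixels
baseline = 0.055  # assumed stereo baseline in meters
depth = np.where(disparity > 0, fx * baseline / disparity, 0.0)  # depth map in meters
```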

Teaching

Instructional Assistant for CSE 152A: Introduction to Computer Vision, Fall 2022, at UCSD. Instructor: Manmohan Chandraker.


Modified from Jon Barron's personal website.