Alexander Gao

I am a second-year PhD student in Computer Science at the University of Maryland, College Park, advised by Ming C. Lin in the GAMMA Lab. My research interests revolve around Inverse Graphics, Physically-Based Simulation, and Scientific Computing.

Previously, I worked as an Applied Scientist at AWS Robotics. I completed my M.S. in Computer Science at New York University, where I was lucky to gain research experience working with Ken Perlin and Lerrel Pinto. I received my B.A. in Film Production from the USC School of Cinematic Arts.

Email  /  CV  /  Google Scholar  /  Twitter  /  Github


   Our paper NeuPhysics is accepted to NeurIPS 2022!
   I began my internship at Google Geo AR.

NeuPhysics: Editable Neural Geometry and Physics from Monocular Videos
NeurIPS, 2022

Alexander Gao*, Yi-Ling Qiao*, and Ming C. Lin.

Paper / Code / Website

We present a method for learning 3D geometry and physics parameters of a dynamic scene from only a monocular RGB video input. To decouple the learning of underlying scene geometry from dynamic motion, we represent the scene as a time-invariant signed distance function (SDF) which serves as a reference frame, along with a time-conditioned deformation field. We further bridge this neural geometry representation with a differentiable physics simulator by designing a two-way conversion between the neural field and its corresponding hexahedral mesh, enabling us to estimate physics parameters from the source video by minimizing a cycle consistency loss. Our method also allows a user to interactively edit 3D objects from the source video by modifying the recovered hexahedral mesh, and propagating the operation back to the neural field representation. Experiments show that our method achieves superior mesh and video reconstruction of dynamic scenes compared to other competitive Neural Field approaches, and we provide extensive examples which demonstrate its ability to extract useful 3D representations from videos captured with consumer-grade cameras.
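The core representation above can be illustrated with a toy sketch: a time-invariant canonical SDF queried through a time-conditioned deformation field. Everything here is illustrative, not the paper's code; the canonical shape is a unit sphere and the "deformation" is a hand-written translation standing in for the learned networks.

```python
import numpy as np

def canonical_sdf(x):
    """Signed distance to a unit sphere in the canonical (reference) frame."""
    return np.linalg.norm(x) - 1.0

def deformation(x, t):
    """Map an observation-space point at time t back to the canonical frame."""
    offset = np.array([0.5 * t, 0.0, 0.0])  # toy motion along x
    return x - offset

def dynamic_sdf(x, t):
    """SDF of the moving scene: deform the query point, then evaluate."""
    return canonical_sdf(deformation(x, t))

# At t=0 the surface passes through (1,0,0); by t=1 it has shifted by 0.5.
print(dynamic_sdf(np.array([1.0, 0.0, 0.0]), 0.0))  # 0.0 (on surface)
print(dynamic_sdf(np.array([1.5, 0.0, 0.0]), 1.0))  # 0.0 (surface has moved)
```

Because geometry lives only in the canonical frame, editing that single SDF (or, in the paper, its hexahedral mesh counterpart) propagates consistently to every time step.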

Simultaneous Navigation and Construction Benchmarking Environments
arXiv Preprint, 2021

Wenyu Han, Chen Feng, Haoran Wu, Alexander Gao, Armand Jordana, Dongdong Liu, Lerrel Pinto, and Ludovic Righetti.

Paper / Code / Website

We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design. In this task, a major robot vision and learning challenge is how to exactly realize the design without GPS, due to the bi-directional coupling of accurate robot localization and navigation with strategic environment manipulation. To stimulate the pursuit of a generic and adaptive solution, we reasonably simplify mobile construction as a partially observable Markov decision process (POMDP) in 1/2/3D grid worlds, and benchmark the performance of both a handcrafted policy with basic localization and planning and state-of-the-art deep reinforcement learning (RL) methods.
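The POMDP framing can be sketched in its simplest 1D form. This is an illustrative toy only (the class name, actions, and reward are assumptions, not the benchmark's actual API): an agent with no GPS must lay bricks to match a target design while observing only a local window of the structure.

```python
import numpy as np

class MobileConstruction1D:
    """Toy 1D grid world: navigate and build to match a target design."""

    def __init__(self, design, window=1):
        self.design = np.array(design)           # target 0/1 brick pattern
        self.built = np.zeros_like(self.design)  # current structure
        self.pos = 0                             # true pose (hidden from the agent)
        self.window = window

    def observe(self):
        """Partial observation: built cells in a local window around the agent."""
        lo = max(self.pos - self.window, 0)
        hi = min(self.pos + self.window + 1, len(self.built))
        return self.built[lo:hi].copy()

    def step(self, action):
        """Actions: 'left', 'right', or 'build' at the current cell."""
        if action == "left":
            self.pos = max(self.pos - 1, 0)
        elif action == "right":
            self.pos = min(self.pos + 1, len(self.built) - 1)
        elif action == "build":
            self.built[self.pos] = 1
        # Reward: fraction of cells matching the design.
        return (self.built == self.design).mean()

env = MobileConstruction1D([1, 0, 1])
env.step("build")
env.step("right")
env.step("right")
print(env.step("build"))  # 1.0 (design fully realized)
```

The coupling the paper highlights shows up even here: building changes the very structure the agent would use to localize itself, so localization and manipulation cannot be solved independently.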

*Denotes equal contribution.

This site is based on Jon Barron's template.