I am a final-year PhD student in the Robotics Institute, School of Computer Science, Carnegie Mellon University, advised by Abhinav Gupta and Shubham Tulsiani. I was recently a student researcher at Google DeepMind in Mountain View. Previously, I was a visiting researcher at FAIR, Meta AI in Pittsburgh for two years during my PhD. I am engaged in the quest to understand intelligence by trying to simulate it. Although this quest has kept me fully occupied for the past several years, I also paint, and I have a Bachelor of Arts degree in Fine Arts. Some of my paintings can be found here.
[2019] BTech in Computer Science and Engineering, IIT Kanpur
News/Highlights
[2024] Open-X: Best Conference Paper Award at ICRA 2024
[2024] HOPMan: Best Paper in Robot Manipulation Finalist at ICRA 2024
[2023] RoboAgent: Outstanding Presentation Award at the Robot Learning Workshop, NeurIPS 2023
[2023] Our sample-efficient universal manipulation research was covered by TechCrunch, ACM, IEEE
[2021] Our research on safe exploration for robotics was covered by VentureBeat
If you have any questions or want to collaborate, feel free to send me an email! I am always excited to learn more by talking with people.
For junior graduate/undergraduate students: I commit 40 minutes every week to chat about anything related to career guidance, life goals, and research on AI and adjacent areas. If such a meeting would be helpful, feel free to email me for a 20-minute or a 40-minute slot, and include "HELLOHOMANGA" in the subject of the email.
Research
I'm interested in developing embodied AI systems that can help us with the humdrum of everyday activities in messy rooms, offices, and kitchens, reliably, compliantly, and at scale, without requiring significant robot-specific data collection or task-specific heuristics. A major thrust of my research is combining robot-specific data with predictive planning from diverse web videos, such as YouTube clips of humans doing daily chores, to develop robust robot learning algorithms that are sample-efficient, safety-aware through constraint satisfaction, and able to scale across diverse (unseen) real-world tasks. I have eclectic research interests, and have also worked on robustness in machine learning, and on improving sample efficiency and representations in reinforcement learning.
In my research, I conduct experiments across robot embodiments to demonstrate generalization of policies to unseen tasks, including manipulation of completely unseen object types with novel motions. Here are some glimpses of common goal/language-conditioned policy deployments in unseen offices and kitchens:
Glimpses of robot deployment results from my works (Gen2Act, Track2Act, HOPMan, RoboAgent). Each robot is controlled by a single goal-conditioned policy, where the goal is either an image or a language description specifying the task, and is deployed in unseen offices and kitchens.
We develop an in-context action prediction assistant for daily activities: HandsOnVLM predicts future interaction trajectories of human hands in a scene given high-level colloquial task specifications in natural language.
Casting language-conditioned manipulation as human video generation followed by closed-loop policy execution conditioned on the generated video enables solving diverse real-world tasks involving object/motion types unseen in the robot dataset.
We can train a model for embodiment-agnostic point track prediction from web videos, combined with embodiment-specific residual policy learning, for diverse real-world manipulation in everyday office and kitchen scenes. The resulting goal-conditioned policy can be deployed zero-shot in unseen scenarios.
We can develop a single robot manipulation agent capable of performing over 38 tasks across hundreds of scenes, through semantic augmentations that multiply data and action-chunking transformers that fit the multi-modal data distribution.
Learning interaction plans from diverse passive human videos on the web, followed by translation to robotic embodiments can help develop a single goal-conditioned policy that scales to over 100 diverse tasks in unseen scenarios, including real kitchens and offices.
Learning to predict plausible hand motion trajectories from passive human videos on the web, followed by transformation of the predictions to a robot's frame of reference enables zero-shot coarse-manipulation with real-world objects.
Through effective augmentations enabled by recent advances in generative modeling, we can develop a framework for learning robust manipulation policies capable of solving multiple tasks in diverse real-world scenes.
We can enable goal-directed robot exploration in the real world by learning an affordance model to predict plausible future frames given an initial image from passive human interaction videos, in combination with self-behavior cloning for policy learning.
Empowerment along with mutual information maximization helps learn functionally relevant factors in visual model-based RL, especially in environments with complex visual distractors.
Training a critic to make conservative safety estimates by over-estimating how unsafe a particular state is can help significantly reduce the number of catastrophic failures in constrained RL.
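As a rough illustration of the training signal (all names hypothetical; loosely a CQL-style conservative objective adapted to safety, not the paper's exact implementation), the safety critic is regressed toward observed failure signals while an extra term inflates its risk estimates on actions proposed by the current policy:

    # Rough sketch: conservative safety-critic update. q_risk estimates the
    # probability of catastrophic failure; the conservative term pushes risk
    # estimates up on policy actions relative to actions seen in the data,
    # so the agent errs on the side of predicting "unsafe".
    import torch

    def safety_critic_loss(q_risk, policy, batch, gamma=0.99, alpha=0.5):
        s, a, fail, s_next, done = batch  # fail is 1 on catastrophic failure
        with torch.no_grad():
            target = fail + gamma * (1 - done) * q_risk(s_next, policy(s_next))
        bellman = ((q_risk(s, a) - target) ** 2).mean()
        # Minimizing the negated gap maximizes risk on policy actions.
        conservative = q_risk(s, policy(s)).mean() - q_risk(s, a).mean()
        return bellman - alpha * conservative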
We can learn to imitate human videos for manipulation by extracting task-agnostic keypoints to define an imitation objective that abstracts out aspects of the human/robot embodiment gap.
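A minimal sketch of such a keypoint-based objective (illustrative; assumes keypoints have already been extracted and temporally aligned for both embodiments):

    # Illustrative: embodiment-agnostic imitation cost. Comparing keypoint
    # trajectories instead of raw pixels abstracts away appearance
    # differences between a human hand and a robot gripper.
    import numpy as np

    def keypoint_imitation_cost(human_traj, robot_traj):
        """Both inputs: (T, K, 2) arrays of K keypoints over T steps."""
        return float(np.mean(np.linalg.norm(human_traj - robot_traj, axis=-1)))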
Combining online planning over high-level skills with an amortized low-level policy can improve the sample efficiency of model-based RL for solving complex tasks and transferring across tasks with similar dynamics.
Updating the top action sequences identified by CEM through a few gradient steps helps improve the sample efficiency and performance of planning in model-based RL.
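A minimal sketch of the planner (hypothetical names; assumes a differentiable learned model that scores a candidate action sequence from a start state):

    # Minimal sketch: CEM planning where the elite action sequences are
    # refined with a few gradient ascent steps before refitting the sampling
    # distribution. reward_model(state, actions) is assumed to return a
    # differentiable scalar score for a (horizon, action_dim) action sequence.
    import torch

    def plan_cem_gd(reward_model, state, horizon=10, action_dim=4,
                    pop=100, elites=10, iters=5, grad_steps=3, lr=0.01):
        mean = torch.zeros(horizon, action_dim)
        std = torch.ones(horizon, action_dim)
        for _ in range(iters):
            samples = mean + std * torch.randn(pop, horizon, action_dim)
            scores = torch.stack([reward_model(state, a) for a in samples])
            top = samples[scores.topk(elites).indices].clone().requires_grad_(True)
            opt = torch.optim.Adam([top], lr=lr)
            for _ in range(grad_steps):  # gradient refinement of the elites
                loss = -torch.stack([reward_model(state, a) for a in top]).sum()
                opt.zero_grad(); loss.backward(); opt.step()
            mean = top.detach().mean(0)  # refit distribution to refined elites
            std = top.detach().std(0) + 1e-6
        return mean[0]  # execute the first action of the planned sequence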
Task-conditioned hypernetworks can be used to continually adapt to varying environment dynamics with a limited replay buffer in lifelong robot learning.
Keeping track of the currently reachable frontier of states, and executing a deterministic policy to reach the frontier followed by a stochastic policy beyond it, can facilitate principled exploration in RL.
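An illustrative sketch of one such episode (the environment interface and policy stubs are assumptions, not the paper's implementation):

    # Illustrative: frontier-guided exploration. Act deterministically to
    # return to a sampled frontier state, then switch to a stochastic
    # policy to push beyond it, growing the frontier with new states.
    import random

    def frontier_episode(env, det_policy, stoch_policy, frontier,
                         switch_step, max_steps=200):
        state = env.reset()
        goal = random.choice(list(frontier))  # a state on the current frontier
        for t in range(max_steps):
            if t < switch_step:
                action = det_policy(state, goal)  # committed, low-entropy reaching
            else:
                action = stoch_policy(state)      # high-entropy exploration beyond
            state, _, done, _ = env.step(action)
            frontier.add(tuple(state))            # record newly reachable states
            if done:
                break
        return frontier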
Introducing skip connections in the policy and Q-function neural networks can improve the sample efficiency of reinforcement learning algorithms across different continuous control environments.
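A minimal sketch of the architectural change (names and sizes illustrative), where the raw state is concatenated back into the input of every hidden layer; the same modification applies to the Q-function network:

    # Minimal sketch: a policy MLP with skip connections that re-inject
    # the raw state at every hidden layer.
    import torch
    import torch.nn as nn

    class SkipPolicy(nn.Module):
        def __init__(self, state_dim, action_dim, hidden=256, depth=4):
            super().__init__()
            self.layers = nn.ModuleList(
                [nn.Linear(state_dim, hidden)] +
                [nn.Linear(hidden + state_dim, hidden) for _ in range(depth - 1)])
            self.out = nn.Linear(hidden, action_dim)

        def forward(self, state):
            h = torch.relu(self.layers[0](state))
            for layer in self.layers[1:]:
                h = torch.relu(layer(torch.cat([h, state], dim=-1)))
            return torch.tanh(self.out(h))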
Adversarial domain adaptation can be used to train a gradient-descent-based planner in simulation and transfer the learned model to a real navigation environment.
Adversarially learned inference can be generalized to incorporate multiple layers of feedback through reconstructions, self-supervision, and learned knowledge.
Adversarial domain adaptation, appropriately incorporated into a generative zero-shot learning model, can help minimize domain shift and significantly enhance generalization to unseen test classes.
Auditing deep learning models for human-interpretable specifications prior to deployment is important for preventing unintended consequences. These specifications can be obtained by considering variations in an interpretable latent space of a generative model.
Recurrent Neural Network based Generative Adversarial Networks can learn to effectively model the latent preference trends of users in time-series recommendation.
In an analysis of ICLR 2019 and 2020 papers, we find a positive correlation between releasing arXiv preprints during the review period and acceptance rates for papers by well-known authors.
Passive website recommendations embedded in the new-tab displays of browsers (which recommend based on frecency) inhibit people's propensity to visit diverse information sources on the internet.
Spontaneous Parametric Down-Conversion (SPDC) is used to generate entangled photon pairs. SPDC can be studied through the lens of wave optics by making some simplifying theoretical assumptions without compromising empirical results, and a simulation of SPDC can be conveniently designed given these assumptions.