Rohan Panicker

I'm a graduate student at the University of Washington, Seattle.

Email  |  Bio  |  CV  |  Scholar  |  GitHub  |  LinkedIn

profile photo

Research

My research interests center on imitation learning, out-of-distribution detection, and control theory for robotics. My goal is to develop adaptive robots that can navigate and perform high-level tasks effectively in complex environments.

Latest work

Dynamics Models in the Aggressive Off-Road Driving Regime
Tyler Han, Sidharth Talia, Rohan Panicker, Preet Shah, Neel Jawale, Byron Boots

Workshop on Resilient Off-Road Autonomy, ICRA, 2024

arXiv
Quad2Biped: Teaching a Quadruped to Perform a Handstand Using Reinforcement Learning

The purpose of this project is to move beyond locomotion towards loco-manipulation tasks; learning a high-level skill such as a handstand is a natural starting point. I trained 1024 agents in parallel using deep reinforcement learning.
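
As a rough illustration, a parallel training run of this kind could be set up as below with Stable-Baselines3. The environment id "QuadHandstand-v0" is hypothetical, and reaching 1024 agents in practice typically relies on a GPU-based simulator rather than the CPU vectorized environments shown here.

```python
# Minimal sketch of parallel policy training; "QuadHandstand-v0" is a
# hypothetical environment id, and a GPU-parallel simulator is usually
# needed to step this many agents efficiently.
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

N_ENVS = 1024  # number of environment copies stepped together per update

vec_env = make_vec_env("QuadHandstand-v0", n_envs=N_ENVS)
model = PPO("MlpPolicy", vec_env, n_steps=32, batch_size=4096, verbose=1)
model.learn(total_timesteps=5_000_000)
model.save("quad2biped_handstand")
```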

Hierarchical Reinforcement Learning in Tabular Settings

The purpose of this project is to develop a hierarchical reinforcement learning framework in the MiniGrid environment. It involves training a high-level planner and a low-level policy that together navigate to a goal while solving sub-tasks such as unlocking doors with keys.
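
A minimal tabular sketch of this two-level structure is shown below, assuming the MiniGrid DoorKey task; the subgoal set, completion checks, pseudo-rewards, and learning constants are illustrative assumptions, not the exact framework used in the project.

```python
# Illustrative high-level / low-level decomposition on MiniGrid DoorKey.
from collections import defaultdict
import random

import gymnasium as gym
import minigrid  # noqa: F401  (importing registers the MiniGrid env ids)

SUBGOALS = ("pick_up_key", "open_door", "reach_goal")
ALPHA, GAMMA, EPS = 0.1, 0.99, 0.1

planner_q = defaultdict(float)   # (subgoal_progress, subgoal) -> value
worker_q = defaultdict(float)    # (subgoal, obs_key, action) -> value

def obs_key(obs):
    # Compress the dict observation into a hashable key for the tables.
    return (obs["image"].tobytes(), int(obs["direction"]))

def subgoal_done(env, subgoal, terminated, reward):
    # Illustrative completion checks based on MiniGrid's unwrapped state.
    base = env.unwrapped
    if subgoal == "pick_up_key":
        return base.carrying is not None
    if subgoal == "open_door":
        return any(getattr(c, "is_open", False) for c in base.grid.grid)
    return terminated and reward > 0  # "reach_goal"

def eps_greedy(table, prefix, choices):
    if random.random() < EPS:
        return random.choice(choices)
    return max(choices, key=lambda c: table[prefix + (c,)])

env = gym.make("MiniGrid-DoorKey-5x5-v0")
actions = list(range(env.action_space.n))

for episode in range(500):
    obs, _ = env.reset()
    progress = 0                                # index of the next subgoal
    while progress < len(SUBGOALS):
        subgoal = eps_greedy(planner_q, (progress,), list(SUBGOALS))
        subgoal_return, steps, done = 0.0, 0, False
        while not done and steps < 50:
            s = obs_key(obs)
            a = eps_greedy(worker_q, (subgoal, s), actions)
            obs, reward, terminated, truncated, _ = env.step(a)
            reached = subgoal_done(env, subgoal, terminated, reward)
            pseudo_r = 1.0 if reached else -0.01  # shaped reward for the worker
            s2 = obs_key(obs)
            best_next = max(worker_q[(subgoal, s2, a2)] for a2 in actions)
            worker_q[(subgoal, s, a)] += ALPHA * (
                pseudo_r + GAMMA * best_next - worker_q[(subgoal, s, a)]
            )
            subgoal_return += pseudo_r
            done = reached or terminated or truncated
            steps += 1
        # Credit the planner with the worker's return for the chosen subgoal.
        planner_q[(progress, subgoal)] += ALPHA * (
            subgoal_return - planner_q[(progress, subgoal)]
        )
        if reached and subgoal == SUBGOALS[progress]:
            progress += 1
        if terminated or truncated:
            break
```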

Old work (2017-2020)

Sensor fusion between IMU and 2D LiDAR Odometry based on NDT-ICP algorithm for Real-Time Indoor 3D Mapping
Rohan Panicker

In this paper, we fuse data from an Inertial Measurement Unit (IMU) and a 2D Light Detection and Ranging (LiDAR) sensor using an Extended Kalman Filter (EKF) to produce a 3D map of an indoor environment. A minimal sketch of this fusion step is shown below.

TechRxiv
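
The NumPy sketch below illustrates the fusion step: the IMU drives the EKF prediction and the NDT-ICP scan-matching pose acts as the measurement. The planar [x, y, yaw] state and the noise values here are simplifying assumptions for illustration, not the paper's exact formulation.

```python
# Compact EKF fusion sketch: IMU-driven prediction, scan-matcher correction.
import numpy as np

x = np.zeros(3)                  # state: [x, y, yaw]
P = np.eye(3) * 0.1              # state covariance
Q = np.diag([0.02, 0.02, 0.01])  # process noise (IMU integration drift)
R = np.diag([0.05, 0.05, 0.02])  # measurement noise (scan-matcher pose)

def ekf_predict(x, P, v, yaw_rate, dt):
    """Propagate the pose with an IMU-driven planar motion model."""
    theta = x[2]
    x_pred = x + np.array([v * np.cos(theta) * dt,
                           v * np.sin(theta) * dt,
                           yaw_rate * dt])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1, 0, -v * np.sin(theta) * dt],
                  [0, 1,  v * np.cos(theta) * dt],
                  [0, 0,  1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z):
    """Correct the pose with an NDT-ICP odometry measurement z = [x, y, yaw]."""
    H = np.eye(3)                # the scan matcher observes the pose directly
    y = z - H @ x                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(3) - K @ H) @ P

# One fuse cycle with made-up sensor readings:
x, P = ekf_predict(x, P, v=0.5, yaw_rate=0.1, dt=0.05)
x, P = ekf_update(x, P, z=np.array([0.026, 0.001, 0.005]))
```
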
Voice-controlled upper body exoskeleton
Shivam Tripathy, Rohan Panicker, Shubh Shrey, Rutvik Naik, S S Pachpore

This paper is about designing an upper-body exoskeleton and using an Arduino-supported voice-recognition module to translate spoken commands into the desired actuations.

arXiv
Custom PTZ for tinyML

This project builds a custom pan-tilt-zoom (PTZ) setup that supports object detection and tracking. Objects are recognized by training a model with the X-CUBE-AI library, and tracking is done with Lucas-Kanade optical flow.
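
A minimal OpenCV sketch of the tracking stage is shown below: once the detector reports an object, feature points are followed frame to frame with pyramidal Lucas-Kanade optical flow. The camera index and parameter values are assumptions, and the on-device X-CUBE-AI detector is outside the scope of the sketch.

```python
# Track feature points across frames with pyramidal Lucas-Kanade optical flow.
import cv2

cap = cv2.VideoCapture(0)  # assumed camera index
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Seed points to track; in the real pipeline these would come from the
# detector's bounding box rather than whole-frame corners.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                 qualityLevel=0.3, minDistance=7)

lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

while points is not None and len(points) > 0:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track the points from the previous frame into the current one.
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, points, None, **lk_params)
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    if len(points) > 0:
        # The tracked centroid would drive the pan/tilt servos here.
        cx, cy = points.reshape(-1, 2).mean(axis=0)

cap.release()
```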