Hochul Hwang
I am a Ph.D. candidate at UMass Amherst developing guide dog robots for visually impaired individuals. My research spans robot perception, planning, and human-robot interaction.
Perception & Planning: Building vision-based navigation systems using foundation models for safe, long-term autonomy.
Human-Robot Interaction: Investigating how users interact with assistive robots through field studies, informing design for real-world deployment.
Advised by Donghyun Kim, co-advised by Ivan Lee and Joydeep Biswas.
news
| Jan 31, 2026 | Two papers (Quiet Locomotion Control and GuideTWSI) were accepted to ICRA'26! |
|---|---|
| Dec 8, 2025 | Our GuideNav vision-only visual teach-and-repeat paper was accepted to HRI'26! |
| Sep 30, 2025 | I started my internship at Glidance as a Robotics & ML Intern! |
| Jun 27, 2024 | Our UR'24 paper was selected as a Best Paper Finalist (top 9 papers of all submissions)! |
| Apr 27, 2024 | Our CHI'24 paper won the Best Paper Award (top 1% of all submissions)! |
publications
GuideNav: User-Informed Development of a Vision-Only Robotic Navigation Assistant For Blind Travelers
Hochul Hwang, Soowan Yang, Jahir S Monon, and 4 more authors
HRI 2026

While commendable progress has been made in user-centric research on mobile assistive systems for blind and low-vision (BLV) individuals, references that directly inform robot navigation design remain rare. To bridge this gap, we conducted a comprehensive human study involving interviews with 26 guide dog handlers, four white cane users, nine guide dog trainers, and one O&M trainer, along with 15+ hours of observing guide dog-assisted walking. After de-identification, we open-sourced the dataset to promote human-centered development and informed decision-making for assistive systems for BLV people. Building on insights from this formative study, we developed GuideNav, a vision-only, teach-and-repeat navigation system. Inspired by how guide dogs are trained and assist their handlers, GuideNav autonomously repeats a path demonstrated by a sighted person using a robot. Specifically, the system constructs a topological representation of the taught route, integrates visual place recognition with temporal filtering, and employs a relative pose estimator to compute navigation actions, all without relying on costly, heavy, power-hungry sensors such as LiDAR. In field tests, GuideNav consistently achieved kilometer-scale route following across five outdoor environments, maintaining reliability despite noticeable scene variations between teach and repeat runs. A user study with 3 guide dog handlers and 1 guide dog trainer further confirmed the system's feasibility, marking (to our knowledge) the first demonstration of a quadruped mobile system retrieving a path in a manner comparable to guide dogs.
@article{hwang2026guidenav,
  title = {GuideNav: User-Informed Development of a Vision-Only Robotic Navigation Assistant For Blind Travelers},
  author = {Hwang, Hochul and Yang, Soowan and Monon, Jahir S and Giudice, Nicholas A and Lee, Sunghoon I and Biswas, Joydeep and Kim, Donghyun},
  journal = {HRI},
  year = {2026},
}
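To make the pipeline concrete, here is a minimal sketch of the repeat phase described in the abstract: a topological route of place embeddings, visual place recognition restricted by a temporal window, and a relative pose turned into a velocity command. `embed_image` and `estimate_relative_pose` are hypothetical stand-ins and the gains are arbitrary; this illustrates the idea, not the GuideNav implementation.

```python
# Minimal sketch of the repeat phase of a vision-only teach-and-repeat
# system in the spirit of GuideNav. `embed_image` (a place-recognition
# embedding) and `estimate_relative_pose` (an image-pair pose model) are
# hypothetical stand-ins, not the actual GuideNav components.
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

class TeachAndRepeat:
    def __init__(self, taught_images, embed_image, estimate_relative_pose, window=5):
        # Teach phase: the route is stored topologically, as an ordered
        # list of keyframe images and their place embeddings.
        self.images = list(taught_images)
        self.nodes = [embed_image(img) for img in self.images]
        self.embed = embed_image
        self.rel_pose = estimate_relative_pose
        self.window = window  # temporal filter: only look just ahead of the last match
        self.last_idx = 0

    def step(self, current_image):
        # Visual place recognition with temporal filtering: match the
        # current view only against nodes near the previous match, so
        # localization cannot jump around the route.
        q = self.embed(current_image)
        lo, hi = self.last_idx, min(len(self.nodes), self.last_idx + self.window)
        sims = [cosine_sim(q, self.nodes[i]) for i in range(lo, hi)]
        self.last_idx = lo + int(np.argmax(sims))

        # Relative pose (dx forward, dy lateral, dyaw) to the matched
        # keyframe, converted into a velocity command with arbitrary gains.
        dx, dy, dyaw = self.rel_pose(current_image, self.images[self.last_idx])
        v = 0.7 * float(np.clip(dx, 0.0, 1.0))  # capped forward speed
        w = 1.5 * dyaw + 0.5 * dy               # steer back onto the taught path
        done = self.last_idx >= len(self.nodes) - 1
        return v, w, done
```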
Human-Centered Development of Guide Dog Robots: Quiet and Stable Locomotion Control
Shangqun Yu, Hochul Hwang, Trung M Dang, and 4 more authors
ICRA 2026

A quadruped robot is a promising system that can offer assistance comparable to that of dog guides due to its similar form factor. However, various challenges remain in making these robots a reliable option for blind and low-vision (BLV) individuals. Among these challenges, noise and jerky motion during walking are critical drawbacks of existing quadruped robots. While these issues have largely been overlooked in guide dog robot research, our interviews with guide dog handlers and trainers revealed that acoustic and physical disturbances can be particularly disruptive for BLV individuals, who rely heavily on environmental sounds for navigation. To address these issues, we developed a novel walking controller for slow stepping and smooth foot swing/contact while maintaining human walking speed, as well as robust and stable balance control. The controller integrates with a perception system to facilitate locomotion over non-flat terrains, such as stairs. Our controller was extensively tested on the Unitree Go1 robot and, when compared with other control methods, demonstrated a significant noise reduction: half that of the default locomotion controller. In this study, we adopt a mixed-methods approach to evaluate its usability with BLV individuals. In our indoor walking experiments, participants compared our controller to the robot's default controller. Results demonstrated superior acceptance of our controller, highlighting its potential to improve the user experience of guide dog robots.
@article{yu2026locomotion,
  title = {Human-Centered Development of Guide Dog Robots: Quiet and Stable Locomotion Control},
  author = {Yu, Shangqun and Hwang, Hochul and Dang, Trung M and Biswas, Joydeep and Giudice, Nicholas A and Lee, Sunghoon Ivan and Kim, Donghyun},
  journal = {ICRA},
  year = {2026},
  video = {https://youtu.be/8-pz_8Hqe6s},
}
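One intuition behind quiet contacts is easy to illustrate: use a swing-height profile whose vertical velocity vanishes at touchdown, so the foot decelerates into the ground rather than slapping it. The sketch below is didactic only; the profile shape and numbers are assumptions, not the paper's controller.

```python
# Didactic sketch: a swing-height profile with zero vertical velocity at
# lift-off and touchdown, so the foot decelerates into contact instead of
# slapping the ground. Shape and numbers are illustrative assumptions,
# not the paper's locomotion controller.
import numpy as np

def swing_height(phase, apex=0.06):
    # sin^2 bump: z(0) = z(1) = 0, and dz/ds = apex*pi*sin(2*pi*s),
    # which also vanishes at both ends of the swing.
    s = np.clip(phase, 0.0, 1.0)
    return apex * np.sin(np.pi * s) ** 2

def swing_height_rate(phase, apex=0.06, swing_time=0.4):
    # Time derivative of the profile for a `swing_time`-second swing.
    s = np.clip(phase, 0.0, 1.0)
    return apex * np.pi * np.sin(2.0 * np.pi * s) / swing_time

# Touchdown is approached with ~zero vertical speed.
for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"phase={s:.2f}  z={swing_height(s):.3f} m  zdot={swing_height_rate(s):+.3f} m/s")
```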
AVA in Action: Developing a Guide Dog Robot for Blind and Low-Vision People
Hochul Hwang, Krisha Adhikari, Parth Goel, and 11 more authors
In CVPR Workshop on AVA 2025

@inproceedings{hwang2025ava,
  title = {AVA in Action: Developing a Guide Dog Robot for Blind and Low-Vision People},
  author = {Hwang, Hochul and Adhikari, Krisha and Goel, Parth and Nguyen, Anh and Shodhaka, Satya and Yu, Shangqun and Dang, Trung M and Suzuki, Ken and Chebly, Georges and White, Peter and Biswas, Joydeep and Giudice, Nicholas A and Lee, Sunghoon I and Kim, Donghyun},
  booktitle = {CVPR Workshop on AVA},
  year = {2025},
}
Hochul Hwang, Ken Suzuki, Nicholas A Giudice, and 3 more authors
ASSETS UrbanAccess Workshop 2024

While guide dogs offer essential mobility assistance, their high cost, limited availability, and care requirements make them inaccessible to most blind or low-vision (BLV) individuals. Recent advances in quadruped robots provide a scalable solution for mobility assistance, but many current designs fail to meet real-world needs due to a lack of understanding of handler and guide dog interactions. In this paper, we share lessons learned from developing a human-centered guide dog robot, addressing challenges such as optimal hardware design, robust navigation, and informative scene description for user adoption. By conducting semi-structured interviews and human experiments with BLV individuals, guide dog handlers, and trainers, we identified key design principles to improve safety, trust, and usability in robotic mobility aids. Our findings lay the building blocks for future development of guide dog robots, ultimately enhancing independence and quality of life for BLV individuals.
Hochul Hwang, Krisha Adhikari, Satya Shodhaka, and 1 more author
RiTA 2024

Robotic mobility aids for blind and low-vision (BLV) individuals rely heavily on deep learning-based vision models specialized for various navigational tasks. However, the performance of these models is often constrained by the availability and diversity of real-world datasets, which are challenging to collect in sufficient quantities for different tasks. In this study, we investigate the effectiveness of synthetic data, generated using Unreal Engine 4, for training robust vision models for this safety-critical application. Our findings demonstrate that synthetic data can enhance model performance across multiple tasks, showcasing both its potential and its limitations when compared to real-world data. We offer valuable insights into optimizing synthetic data generation for developing robotic mobility aids. Additionally, we publicly release our generated synthetic dataset to support ongoing research in assistive technologies for BLV individuals, available at https://hchlhwang.github.io/SToP.
Best Paper Finalist
Is it safe to cross? Interpretable Risk Assessment with GPT-4V for Safety-Aware Street Crossing
Hochul Hwang, Sunjae Kwon, Yekyung Kim, and 1 more author
UR 2024

Safely navigating street intersections is a complex challenge for blind and low-vision individuals, as it requires a nuanced understanding of the surrounding context, a task heavily reliant on visual cues. Traditional methods for assisting in this decision-making process often fall short, lacking the ability to provide a comprehensive scene analysis and safety level. This paper introduces an innovative approach that leverages large multimodal models (LMMs) to interpret complex street crossing scenes, offering a potential advancement over conventional traffic signal recognition techniques. By generating a safety score and scene description in natural language, our method supports safe decision-making for blind and low-vision individuals. We collected crosswalk intersection data containing multiview egocentric images captured by a quadruped robot and annotated the images with corresponding safety scores based on our predefined safety score categorization. Grounded in the visual knowledge extracted from images and the text prompt, we evaluate a large multimodal model for safety score prediction and scene description. Our findings highlight the reasoning and safety score prediction capabilities of an LMM, activated by various prompts, as a pathway to developing a trustworthy system, crucial for applications requiring reliable decision-making support.
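A minimal sketch of the prompting pattern the abstract describes: multiview egocentric images plus a text prompt go to a large multimodal model, which returns a safety score with a rationale. The model name, prompt wording, and score scale here are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of LMM-based street-crossing assessment: multiview images in,
# a safety score plus natural-language rationale out. Prompt, model
# name, and score scale are illustrative, not the paper's exact setup.
import base64
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def assess_crossing(image_paths):
    content = [{
        "type": "text",
        "text": ("You assist a blind pedestrian at a street crossing. From "
                 "these egocentric views, return JSON with `safety_score` "
                 "(1 = do not cross ... 5 = safe to cross) and `reason`."),
    }]
    for p in image_paths:
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64(p)}"},
        })
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in for the GPT-4V endpoint used in the paper
        messages=[{"role": "user", "content": content}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

# e.g. assess_crossing(["front.jpg", "left.jpg", "right.jpg"])
```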
Best Paper Award
Towards Robotic Companions: Understanding Handler-Guide Dog Interactions for Informed Guide Dog Robot Design
Hochul Hwang, Hee-Tae Jung, Nicholas A Giudice, and 3 more authors
CHI 2024

Dog guides are favored by blind and low-vision (BLV) individuals for their ability to enhance independence and confidence by reducing safety concerns and increasing navigation efficiency compared to traditional mobility aids. However, only a relatively small proportion of BLV people work with dog guides due to their limited availability and associated maintenance responsibilities. There is considerable recent interest in addressing this challenge by developing legged guide dog robots. This study was designed to determine critical aspects of the handler-guide dog interaction and better understand handler needs to inform guide dog robot development. We conducted semi-structured interviews and observation sessions with 23 dog guide handlers and 5 trainers. Thematic analysis revealed critical limitations in guide dog work, desired personalization in handler-guide dog interaction, and important perspectives on future guide dog robots. Grounded in these findings, we discuss pivotal design insights for guide dog robots aimed at adoption within the BLV community.
System Configuration and Navigation of a Guide Dog Robot: Toward Animal Guide Dog-Level Guiding Work
Hochul Hwang, Tim Xia, Ibrahima Keita, and 4 more authors
ICRA 2023

A robot guide dog has compelling advantages over animal guide dogs for its cost-effectiveness, potential for mass production, and low maintenance burden. However, despite the long history of guide dog robot research, previous studies were conducted with little or no consideration of how the guide dog handler and the guide dog work as a team for navigation. To develop a robotic guiding system that is genuinely beneficial to blind or visually impaired individuals, we performed qualitative research, including interviews with guide dog handlers and trainers and first-hand blindfolded walking experiences with various guide dogs. Grounded in the facts learned from vivid experience and interviews, we build a collaborative indoor navigation scheme for a guide dog robot that includes preferred features such as speed and directional control. For collaborative navigation, we propose a semantic-aware local path planner that enables safe and efficient guiding work by utilizing semantic information about the environment and considering the handler's position and directional cues to determine a collision-free path. We evaluate our integrated robotic system through guided blindfolded walking tests in indoor settings and demonstrate guide dog-like navigation behavior by avoiding obstacles at typical gait speed (0.7 m/s).
@article{hwang2023system,
  title = {System Configuration and Navigation of a Guide Dog Robot: Toward Animal Guide Dog-Level Guiding Work},
  author = {Hwang, Hochul and Xia, Tim and Keita, Ibrahima and Suzuki, Ken and Biswas, Joydeep and Lee, Sunghoon I and Kim, Donghyun},
  journal = {ICRA},
  year = {2023},
  video = {https://www.youtube.com/watch?v=9Y7Gvbw0qr4},
}
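Schematically, a semantic-aware local planner of the kind described can be seen as scoring candidate motion arcs by obstacle cost, semantic terrain cost, and agreement with the handler's directional cue. The sketch below is that schematic view, with hypothetical cost functions and hand-picked weights rather than the paper's actual planner.

```python
# Schematic local planner: choose among candidate velocity arcs using
# obstacle clearance, semantic terrain cost (e.g., prefer sidewalk over
# grass), and the handler's directional cue. Cost functions and weights
# are illustrative assumptions, not the paper's implementation.
import numpy as np

def rollout(v, w, steps=20, dt=0.1):
    # Forward-simulate a unicycle arc from the robot-frame origin.
    x = y = yaw = 0.0
    pts = []
    for _ in range(steps):
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += w * dt
        pts.append((x, y))
    return np.array(pts)

def plan(obstacle_cost, semantic_cost, handler_heading, v_pref=0.7):
    # obstacle_cost / semantic_cost: callables (x, y) -> cost, with
    # obstacle_cost returning np.inf inside obstacles;
    # handler_heading: desired direction (rad) from the handler's cue.
    best, best_cmd = np.inf, (0.0, 0.0)
    for w in np.linspace(-1.0, 1.0, 21):        # candidate turn rates
        path = rollout(v_pref, w)
        c_obs = sum(obstacle_cost(*p) for p in path)
        if c_obs >= np.inf:                     # discard colliding arcs
            continue
        c_sem = sum(semantic_cost(*p) for p in path)
        c_cue = abs(np.arctan2(path[-1][1], path[-1][0]) - handler_heading)
        cost = 10.0 * c_obs + 1.0 * c_sem + 2.0 * c_cue
        if cost < best:
            best, best_cmd = cost, (v_pref, w)
    return best_cmd  # (linear, angular) velocity at typical gait speed
```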
Dynamic Object Avoidance using Event-Data for a Quadruped Robot
Shifan Zhu, Nisal Perera, Shangqun Yu, and 2 more authors
IROS IPPC Workshop 2023

As robots increase in agility and encounter fast-moving objects, dynamic object detection and avoidance become notably challenging. Traditional RGB cameras, burdened by motion blur and high latency, often act as the bottleneck. Event cameras have recently emerged as a promising solution for the challenges related to rapid movement. In this paper, we introduce a dynamic object avoidance framework that integrates both event and RGBD cameras. Specifically, this framework first estimates and compensates for the event camera's motion to detect dynamic objects. Subsequently, depth data is combined to derive a 3D trajectory. When initiating from a static state, the robot adjusts its height based on the predicted collision point to avoid the dynamic obstacle. Through real-world experiments with the Mini-Cheetah, our approach successfully circumvents dynamic objects at speeds up to 5 m/s, achieving an 83% success rate. Supplemental video: https://www.youtube.com/watch?v=wEPvynkVlLA
@article{zhu2023dynamic,
  title = {Dynamic Object Avoidance using Event-Data for a Quadruped Robot},
  author = {Zhu, Shifan and Perera, Nisal and Yu, Shangqun and Hwang, Hochul and Kim, Donghyun},
  journal = {IROS IPPC Workshop},
  year = {2023},
  video = {https://www.youtube.com/watch?v=wEPvynkVlLA},
}
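The "predicted collision point" step reduces, in its simplest form, to fitting a motion model to timestamped 3D detections. The sketch below uses a constant-velocity fit as a deliberate simplification; the paper's actual estimator may differ.

```python
# Minimal constant-velocity trajectory fit for a tracked dynamic object:
# given timestamped 3D detections (from event-based detection + depth),
# estimate velocity by least squares and predict where/when the object
# reaches the robot. A simplification, not the paper's method.
import numpy as np

def fit_and_predict(times, positions, robot_x=0.0):
    t = np.asarray(times)        # shape (N,)
    p = np.asarray(positions)    # shape (N, 3), object positions in m
    # Least-squares line fit p(t) = p0 + v * t, solved per axis.
    A = np.stack([np.ones_like(t), t], axis=1)
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)
    p0, v = coef[0], coef[1]
    if abs(v[0]) < 1e-6:
        return None              # object not approaching along x
    t_hit = (robot_x - p0[0]) / v[0]   # time the object reaches the robot
    return t_hit, p0 + v * t_hit       # impact time and 3D collision point

times = [0.00, 0.02, 0.04, 0.06]
positions = [[2.0, 0.1, 0.5], [1.9, 0.1, 0.5], [1.8, 0.1, 0.5], [1.7, 0.1, 0.5]]
print(fit_and_predict(times, positions))  # object closing at ~5 m/s; impact at t = 0.4 s
```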
Highly sensitive capacitive pressure sensors over a wide pressure range enabled by the hybrid responses of a highly porous nanocomposite
Kyoung-Ho Ha, Weiyi Zhang, Hongwoo Jang, and 5 more authors
Advanced Materials 2021

Past research aimed at increasing the sensitivity of capacitive pressure sensors has mostly focused on developing dielectric layers with surface/porous structures or higher dielectric constants. However, such strategies have only been effective in improving sensitivities at low pressure ranges (e.g., up to 3 kPa). To overcome this well-known obstacle, herein, a flexible hybrid-response pressure sensor (HRPS) composed of an electrically conductive porous nanocomposite (PNC) laminated with an ultrathin dielectric layer is devised. Using a nickel foam template, the PNC is fabricated with carbon nanotube (CNT)-doped Ecoflex to be 86% porous and electrically conductive. The PNC exhibits hybrid piezoresistive and piezocapacitive responses, resulting in significantly enhanced sensitivities (i.e., more than 400%) over wide pressure ranges, from 3.13 kPa⁻¹ within 0–1 kPa to 0.43 kPa⁻¹ within 30–50 kPa. The effect of the hybrid responses is differentiated from the effect of porosity or high dielectric constants by comparing the HRPS with its purely piezocapacitive counterparts. Fundamental understanding of the HRPS and the prediction of optimal CNT doping are achieved through simplified analytical models. The HRPS is able to measure pressures from as subtle as the temporal arterial pulse to as large as footsteps.
@article{ha2021highly,
  title = {Highly sensitive capacitive pressure sensors over a wide pressure range enabled by the hybrid responses of a highly porous nanocomposite},
  author = {Ha, Kyoung-Ho and Zhang, Weiyi and Jang, Hongwoo and Kang, Seungmin and Wang, Liu and Tan, Philip and Hwang, Hochul and Lu, Nanshu},
  journal = {Advanced Materials},
  year = {2021},
}
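For readers outside the sensing community, the quoted sensitivities follow the standard definition for capacitive pressure sensors, restated here as a reference note (not a new result):

```latex
% Standard sensitivity definition for a capacitive pressure sensor:
% relative capacitance change per unit applied pressure.
\[
  S = \frac{\partial\,(\Delta C / C_0)}{\partial P}
\]
% With the reported values, S = 3.13\,\mathrm{kPa^{-1}} over 0--1 kPa and
% S = 0.43\,\mathrm{kPa^{-1}} over 30--50 kPa: the relative capacitance
% change per kPa is roughly 7x higher in the low-pressure regime.
```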
ElderSim: A Synthetic Data Generation Platform for Human Action Recognition in Eldercare Applications
Hochul Hwang, Cheongjae Jang, Geonwoo Park, and 2 more authors
IEEE Access 2021

To train deep learning models for vision-based action recognition of elders' daily activities, we need large-scale activity datasets acquired under various daily living environments and conditions. However, most public datasets used in human action recognition either differ from or have limited coverage of elders' activities in many aspects, making it challenging to recognize elders' daily activities well by only utilizing existing datasets. Recently, such limitations of available datasets have actively been compensated by generating synthetic data from realistic simulation environments and using those data to train deep learning models. In this paper, based on these ideas we develop ElderSim, an action simulation platform that can generate synthetic data on elders' daily activities. For 55 kinds of frequent daily activities of the elders, ElderSim generates realistic motions of synthetic characters with various adjustable data-generating options and provides different output modalities including RGB videos, two- and three-dimensional skeleton trajectories. We then generate KIST SynADL, a large-scale synthetic dataset of elders' activities of daily living, from ElderSim and use the data in addition to real datasets to train three state-of-the-art human action recognition models. From the experiments following several newly proposed scenarios that assume different real and synthetic dataset configurations for training, we observe a noticeable performance improvement by augmenting our synthetic data. We also offer guidance with insights for the effective utilization of synthetic data to help recognize elders' daily activities.
@article{9324837,
  author = {Hwang, Hochul and Jang, Cheongjae and Park, Geonwoo and Cho, Junghyun and Kim, Ig-Jae},
  journal = {IEEE Access},
  title = {ElderSim: A Synthetic Data Generation Platform for Human Action Recognition in Eldercare Applications},
  year = {2021},
  pages = {1-1},
  doi = {10.1109/ACCESS.2021.3051842},
  url = {https://ieeexplore.ieee.org/document/9324837},
}
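The training scenarios described above amount to mixing real and synthetic clips in one training set. A minimal PyTorch sketch of that augmentation, with a random-tensor placeholder dataset standing in for both the real data and KIST SynADL:

```python
# Minimal sketch of mixing real and synthetic (KIST SynADL-style) data
# for action recognition. SkeletonDataset is a random-tensor placeholder;
# real loaders would read files instead.
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class SkeletonDataset(Dataset):
    def __init__(self, n, num_joints=25, seq_len=64, num_classes=55):
        # 55 classes mirrors ElderSim's 55 daily activities; joint count
        # and sequence length are arbitrary assumptions.
        self.x = torch.randn(n, seq_len, num_joints, 3)  # xyz per joint
        self.y = torch.randint(0, num_classes, (n,))
    def __len__(self):
        return len(self.x)
    def __getitem__(self, i):
        return self.x[i], self.y[i]

real = SkeletonDataset(1_000)        # stand-in for a real ADL dataset
synthetic = SkeletonDataset(10_000)  # stand-in for KIST SynADL clips

# The augmentation scenario: concatenate real and synthetic data and
# train the recognition model on the mixture.
train_set = ConcatDataset([real, synthetic])
loader = DataLoader(train_set, batch_size=64, shuffle=True)
x, y = next(iter(loader))
print(x.shape, y.shape)  # torch.Size([64, 64, 25, 3]) torch.Size([64])
```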
Computationally-Robust and Efficient Prioritized Whole-Body Controller with Contact Constraints
Donghyun Kim, Jaemin Lee, Junhyeok Ahn, and 3 more authors
IROS 2018

In this paper, we devise methods for the multiobjective control of humanoid robots, a.k.a. prioritized whole-body controllers, that achieve efficiency and robustness in the algorithmic computations. We use a very general form of whole-body controller that incorporates centroidal momentum dynamics, operational task priorities, contact reaction forces, and internal force constraints. First, we achieve efficiency by solving a quadratic program that only involves the floating base dynamics and the reaction forces. Second, we achieve computational robustness by relaxing task accelerations such that they comply with friction cone constraints. Finally, we incorporate methods for smooth contact transitions to enhance the control of dynamic locomotion behaviors. The proposed methods are demonstrated both in simulation and in real experiments using a passive-ankle bipedal robot.
@article{kim2018computationally,
  title = {Computationally-robust and efficient prioritized whole-body controller with contact constraints},
  author = {Kim, Donghyun and Lee, Jaemin and Ahn, Junhyeok and Campbell, Orion and Hwang, Hochul and Sentis, Luis},
  journal = {IROS},
  year = {2018},
  video = {https://www.youtube.com/watch?v=3uc_p-6tzLg},
}
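A generic way to write the reduced quadratic program the abstract refers to, with decision variables limited to the floating-base acceleration and reaction forces and friction-cone constraints on the forces. The notation is generic, not the paper's exact formulation:

```latex
% Generic reduced whole-body QP: decision variables are only the
% floating-base acceleration \ddot{q}_f and contact forces F (commanded
% values come from a prioritized task-space pass); notation is generic,
% not the paper's exact formulation.
\begin{aligned}
\min_{\ddot{q}_f,\,F}\quad
  & \lVert \ddot{q}_f - \ddot{q}_f^{\mathrm{cmd}} \rVert_{W_q}^2
  + \lVert F - F^{\mathrm{cmd}} \rVert_{W_F}^2 \\
\text{s.t.}\quad
  & A_f\,\ddot{q} + b_f = J_{c,f}^{\top} F
    && \text{(floating-base rows of the dynamics)} \\
  & F \in \mathcal{K}_\mu
    && \text{(friction cones, unilateral contact)}
\end{aligned}
```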
Control scheme and uncertainty considerations for dynamic balancing of passive-ankled bipeds and full humanoids
Donghyun Kim, Steven Jens Jorgensen, Hochul Hwang, and 1 more author
Humanoids 2018

We propose a methodology for dynamically balancing passive-ankled bipeds and full humanoids. As dynamic locomotion without ankle-actuation is more difficult than with actuated feet, our control scheme adopts an efficient whole-body controller that combines inverse kinematics, contact-consistent feed-forward torques, and low-level motor position controllers. To understand real-world sensing and controller requirements, we perform an uncertainty analysis on the linear-inverted-pendulum (LIP)-based footstep planner. This enables us to identify necessary hardware and control refinements to demonstrate that our controller can achieve long-term unsupported dynamic balancing on our series-elastic biped, Mercury. Through simulations, we also demonstrate that our control scheme for dynamic balancing with passive-ankles is applicable to full humanoid robots.
@article{kim2018control,
  title = {Control scheme and uncertainty considerations for dynamic balancing of passive-ankled bipeds and full humanoids},
  author = {Kim, Donghyun and Jorgensen, Steven Jens and Hwang, Hochul and Sentis, Luis},
  journal = {Humanoids},
  year = {2018},
  organization = {IEEE},
}
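For context, the LIP model behind the footstep planner has simple closed-form dynamics, which is what makes propagating sensing uncertainty through it tractable. These are the standard equations, not the paper's specific derivation:

```latex
% Standard linear inverted pendulum (LIP) model: CoM position x at
% constant height h over a stance foot at p, with gravity g.
\[
  \ddot{x} = \frac{g}{h}\,(x - p), \qquad \omega = \sqrt{g/h}
\]
% Its divergent component, the capture point, gives a footstep target
% that brings the pendulum to rest:
\[
  \xi = x + \frac{\dot{x}}{\omega}
\]
% Errors in the sensed state (x, \dot{x}) propagate linearly into \xi,
% which is what an uncertainty analysis of the planner can exploit.
```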
Human Behavior Recognition System and Method Using Hierarchical Class Learning Considering Safety
Junghyun Cho, Ig Jae Kim, and Hochul Hwang
US Patent 2024

Embodiments relate to a human behavior recognition system using hierarchical class learning considering safety, the human behavior recognition system including a behavior class definer configured to form a plurality of behavior classes by sub-setting a plurality of images each including a subject according to pre-designated behaviors and assign a behavior label to the plurality of images, a safety class definer configured to calculate a safety index for the plurality of images, form a plurality of safety classes by sub-setting the plurality of images based on the safety index, and additionally assign a safety label to the plurality of images, and a trainer configured to train a human recognition model by using the plurality of images defined as hierarchical classes by assigning the behavior label and the safety label as training images.
@article{cho2024human,
  title = {Human behavior recognition system and method using hierarchical class learning considering safety},
  author = {Cho, Junghyun and Kim, Ig Jae and Hwang, Hochul},
  journal = {US Patent},
  year = {2024},
  publisher = {Google Patents},
  note = {US Patent App. 17/565,453},
}
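One compact way to realize the trainer the patent describes, where each image carries both a behavior label and a safety label, is a shared backbone with two classification heads trained jointly. A hypothetical PyTorch sketch; the architecture, class counts, and loss weights are all assumptions:

```python
# Hypothetical sketch of joint behavior + safety learning: one shared
# backbone, two heads, one cross-entropy loss per label, mirroring the
# patent's idea of assigning both a behavior label and a safety label
# to each image. Class counts and loss weights are assumptions.
import torch
import torch.nn as nn

class BehaviorSafetyNet(nn.Module):
    def __init__(self, n_behaviors=10, n_safety=3):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for a real CNN/ViT
            nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
        self.behavior_head = nn.Linear(256, n_behaviors)
        self.safety_head = nn.Linear(256, n_safety)

    def forward(self, x):
        z = self.backbone(x)
        return self.behavior_head(z), self.safety_head(z)

model = BehaviorSafetyNet()
ce = nn.CrossEntropyLoss()
x = torch.randn(8, 3, 64, 64)                    # dummy image batch
yb = torch.randint(0, 10, (8,))                  # behavior labels
ys = torch.randint(0, 3, (8,))                   # safety labels
logits_b, logits_s = model(x)
loss = ce(logits_b, yb) + 0.5 * ce(logits_s, ys)  # weighted joint loss
loss.backward()
print(float(loss))
```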